Jan 13 21:29:57.855555 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:29:57.855576 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:29:57.855587 kernel: BIOS-provided physical RAM map:
Jan 13 21:29:57.855593 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:29:57.855599 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:29:57.855605 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:29:57.855613 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:29:57.855619 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:29:57.855625 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:29:57.855633 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:29:57.855640 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:29:57.855646 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:29:57.855652 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:29:57.855658 kernel: NX (Execute Disable) protection: active
Jan 13 21:29:57.855666 kernel: APIC: Static calls initialized
Jan 13 21:29:57.855674 kernel: SMBIOS 2.8 present.
Jan 13 21:29:57.855681 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:29:57.855688 kernel: Hypervisor detected: KVM
Jan 13 21:29:57.855694 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:29:57.855701 kernel: kvm-clock: using sched offset of 2181475486 cycles
Jan 13 21:29:57.855708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:29:57.855715 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:29:57.855722 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:29:57.855729 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:29:57.855736 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:29:57.855746 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:29:57.855753 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:29:57.855760 kernel: Using GB pages for direct mapping
Jan 13 21:29:57.855767 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:29:57.855773 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:29:57.855780 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855787 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855794 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855803 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:29:57.855810 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855817 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855824 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855831 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:57.855837 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:29:57.855844 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:29:57.855855 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:29:57.855864 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:29:57.855871 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:29:57.855878 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:29:57.855885 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:29:57.855892 kernel: No NUMA configuration found
Jan 13 21:29:57.855899 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:29:57.855906 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:29:57.855915 kernel: Zone ranges:
Jan 13 21:29:57.855922 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:29:57.855929 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:29:57.855936 kernel: Normal empty
Jan 13 21:29:57.855946 kernel: Movable zone start for each node
Jan 13 21:29:57.855956 kernel: Early memory node ranges
Jan 13 21:29:57.855965 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:29:57.855974 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:29:57.855984 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:29:57.855997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:29:57.856004 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:29:57.856011 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:29:57.856018 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:29:57.856026 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:29:57.856033 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:29:57.856040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:29:57.856047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:29:57.856056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:29:57.856076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:29:57.856086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:29:57.856096 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:29:57.856105 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:29:57.856112 kernel: TSC deadline timer available
Jan 13 21:29:57.856120 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:29:57.856127 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:29:57.856134 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:29:57.856141 kernel: kvm-guest: setup PV sched yield
Jan 13 21:29:57.856150 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:29:57.856157 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:29:57.856165 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:29:57.856172 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:29:57.856179 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:29:57.856186 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:29:57.856193 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:29:57.856200 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:29:57.856207 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:29:57.856216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:29:57.856228 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:29:57.856236 kernel: random: crng init done
Jan 13 21:29:57.856245 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:29:57.856252 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:29:57.856259 kernel: Fallback order for Node 0: 0
Jan 13 21:29:57.856266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:29:57.856273 kernel: Policy zone: DMA32
Jan 13 21:29:57.856280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:29:57.856290 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:29:57.856297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:29:57.856304 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:29:57.856311 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:29:57.856318 kernel: Dynamic Preempt: voluntary
Jan 13 21:29:57.856325 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:29:57.856345 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:29:57.856353 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:29:57.856360 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:29:57.856370 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:29:57.856377 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:29:57.856384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:29:57.856391 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:29:57.856398 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:29:57.856406 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:29:57.856413 kernel: Console: colour VGA+ 80x25
Jan 13 21:29:57.856419 kernel: printk: console [ttyS0] enabled
Jan 13 21:29:57.856426 kernel: ACPI: Core revision 20230628
Jan 13 21:29:57.856436 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:29:57.856443 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:29:57.856450 kernel: x2apic enabled
Jan 13 21:29:57.856457 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:29:57.856464 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:29:57.856471 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:29:57.856479 kernel: kvm-guest: setup PV IPIs
Jan 13 21:29:57.856495 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:29:57.856502 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:29:57.856509 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:29:57.856517 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:29:57.856524 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:29:57.856534 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:29:57.856541 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:29:57.856549 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:29:57.856556 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:29:57.856566 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:29:57.856573 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:29:57.856581 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:29:57.856588 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:29:57.856596 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:29:57.856603 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:29:57.856611 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:29:57.856619 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:29:57.856626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:29:57.856636 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:29:57.856643 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:29:57.856650 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:29:57.856660 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:29:57.856670 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:29:57.856680 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:29:57.856690 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:29:57.856697 kernel: landlock: Up and running.
Jan 13 21:29:57.856704 kernel: SELinux: Initializing.
Jan 13 21:29:57.856714 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:29:57.856722 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:29:57.856729 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:29:57.856737 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:57.856746 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:57.856757 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:57.856767 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:29:57.856777 kernel: ... version: 0
Jan 13 21:29:57.856790 kernel: ... bit width: 48
Jan 13 21:29:57.856797 kernel: ... generic registers: 6
Jan 13 21:29:57.856805 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:29:57.856812 kernel: ... max period: 00007fffffffffff
Jan 13 21:29:57.856820 kernel: ... fixed-purpose events: 0
Jan 13 21:29:57.856827 kernel: ... event mask: 000000000000003f
Jan 13 21:29:57.856834 kernel: signal: max sigframe size: 1776
Jan 13 21:29:57.856842 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:29:57.856849 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:29:57.856857 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:29:57.856866 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:29:57.856874 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:29:57.856881 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:29:57.856889 kernel: smpboot: Max logical packages: 1
Jan 13 21:29:57.856896 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:29:57.856904 kernel: devtmpfs: initialized
Jan 13 21:29:57.856911 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:29:57.856919 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:29:57.856926 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:29:57.856936 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:29:57.856943 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:29:57.856951 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:29:57.856958 kernel: audit: type=2000 audit(1736803797.663:1): state=initialized audit_enabled=0 res=1
Jan 13 21:29:57.856965 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:29:57.856973 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:29:57.856980 kernel: cpuidle: using governor menu
Jan 13 21:29:57.856988 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:29:57.856995 kernel: dca service started, version 1.12.1
Jan 13 21:29:57.857005 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:29:57.857012 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:29:57.857020 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:29:57.857027 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:29:57.857035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:29:57.857042 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:29:57.857050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:29:57.857057 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:29:57.857073 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:29:57.857083 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:29:57.857090 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:29:57.857099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:29:57.857106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:29:57.857114 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:29:57.857121 kernel: ACPI: Interpreter enabled
Jan 13 21:29:57.857128 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:29:57.857136 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:29:57.857143 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:29:57.857153 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:29:57.857160 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:29:57.857168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:29:57.857397 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:29:57.857530 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:29:57.857651 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:29:57.857661 kernel: PCI host bridge to bus 0000:00
Jan 13 21:29:57.857788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:29:57.857900 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:29:57.858011 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:29:57.858143 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:29:57.858273 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:29:57.858402 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:29:57.858514 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:29:57.858657 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:29:57.858798 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:29:57.858935 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:29:57.859058 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:29:57.859189 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:29:57.859308 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:29:57.859492 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:29:57.859613 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:29:57.859736 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:29:57.859857 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:29:57.859987 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:29:57.860128 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:29:57.860254 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:29:57.860418 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:29:57.860552 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:29:57.860675 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:29:57.860798 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:29:57.860930 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:29:57.861074 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:29:57.861206 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:29:57.861345 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:29:57.861490 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:29:57.861612 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:29:57.861732 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:29:57.861861 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:29:57.861980 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:29:57.861991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:29:57.862003 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:29:57.862010 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:29:57.862018 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:29:57.862026 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:29:57.862033 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:29:57.862041 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:29:57.862048 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:29:57.862055 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:29:57.862063 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:29:57.862082 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:29:57.862090 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:29:57.862098 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:29:57.862106 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:29:57.862113 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:29:57.862121 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:29:57.862128 kernel: iommu: Default domain type: Translated
Jan 13 21:29:57.862136 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:29:57.862143 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:29:57.862153 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:29:57.862160 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:29:57.862168 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:29:57.862298 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:29:57.862456 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:29:57.862587 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:29:57.862598 kernel: vgaarb: loaded
Jan 13 21:29:57.862606 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:29:57.862618 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:29:57.862625 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:29:57.862633 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:29:57.862641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:29:57.862648 kernel: pnp: PnP ACPI init
Jan 13 21:29:57.862777 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:29:57.862788 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:29:57.862796 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:29:57.862806 kernel: NET: Registered PF_INET protocol family
Jan 13 21:29:57.862814 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:29:57.862822 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:29:57.862829 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:29:57.862837 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:29:57.862844 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:29:57.862852 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:29:57.862859 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:29:57.862867 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:29:57.862877 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:29:57.862884 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:29:57.862995 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:29:57.863125 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:29:57.863252 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:29:57.863418 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:29:57.863529 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:29:57.863654 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:29:57.863670 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:29:57.863677 kernel: Initialise system trusted keyrings
Jan 13 21:29:57.863685 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:29:57.863692 kernel: Key type asymmetric registered
Jan 13 21:29:57.863700 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:29:57.863708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:29:57.863715 kernel: io scheduler mq-deadline registered
Jan 13 21:29:57.863723 kernel: io scheduler kyber registered
Jan 13 21:29:57.863730 kernel: io scheduler bfq registered
Jan 13 21:29:57.863740 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:29:57.863747 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:29:57.863755 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:29:57.863763 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:29:57.863770 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:29:57.863778 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:29:57.863785 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:29:57.863793 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:29:57.863800 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:29:57.863931 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:29:57.863942 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:29:57.864053 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:29:57.864175 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:29:57 UTC (1736803797)
Jan 13 21:29:57.864287 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:29:57.864296 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:29:57.864304 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:29:57.864311 kernel: Segment Routing with IPv6
Jan 13 21:29:57.864323 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:29:57.864342 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:29:57.864350 kernel: Key type dns_resolver registered
Jan 13 21:29:57.864357 kernel: IPI shorthand broadcast: enabled
Jan 13 21:29:57.864365 kernel: sched_clock: Marking stable (563002846, 104076212)->(712179661, -45100603)
Jan 13 21:29:57.864372 kernel: registered taskstats version 1
Jan 13 21:29:57.864380 kernel: Loading compiled-in X.509 certificates
Jan 13 21:29:57.864388 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:29:57.864395 kernel: Key type .fscrypt registered
Jan 13 21:29:57.864405 kernel: Key type fscrypt-provisioning registered
Jan 13 21:29:57.864413 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:29:57.864421 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:29:57.864428 kernel: ima: No architecture policies found
Jan 13 21:29:57.864438 kernel: clk: Disabling unused clocks
Jan 13 21:29:57.864449 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:29:57.864459 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:29:57.864470 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:29:57.864478 kernel: Run /init as init process
Jan 13 21:29:57.864489 kernel: with arguments:
Jan 13 21:29:57.864496 kernel: /init
Jan 13 21:29:57.864503 kernel: with environment:
Jan 13 21:29:57.864511 kernel: HOME=/
Jan 13 21:29:57.864518 kernel: TERM=linux
Jan 13 21:29:57.864526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:29:57.864538 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:29:57.864552 systemd[1]: Detected virtualization kvm.
Jan 13 21:29:57.864566 systemd[1]: Detected architecture x86-64.
Jan 13 21:29:57.864576 systemd[1]: Running in initrd.
Jan 13 21:29:57.864584 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:29:57.864592 systemd[1]: Hostname set to <localhost>.
Jan 13 21:29:57.864600 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:29:57.864608 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:29:57.864616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:29:57.864624 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:29:57.864635 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:29:57.864655 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:29:57.864666 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:29:57.864674 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:29:57.864684 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:29:57.864695 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:29:57.864703 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:29:57.864711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:29:57.864720 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:29:57.864728 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:29:57.864736 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:29:57.864744 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:29:57.864753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:29:57.864764 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:29:57.864772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:29:57.864780 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:29:57.864788 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:29:57.864797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:29:57.864807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:29:57.864815 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:29:57.864824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:29:57.864834 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:29:57.864842 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:29:57.864850 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:29:57.864859 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:29:57.864867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:29:57.864875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:29:57.864884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:29:57.864892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:29:57.864900 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:29:57.864912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:29:57.864939 systemd-journald[192]: Collecting audit messages is disabled.
Jan 13 21:29:57.864960 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:29:57.864969 systemd-journald[192]: Journal started
Jan 13 21:29:57.864988 systemd-journald[192]: Runtime Journal (/run/log/journal/87545de28f7844ab8b7e01c8b8bef721) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:29:57.868245 systemd-modules-load[193]: Inserted module 'overlay'
Jan 13 21:29:57.894417 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:29:57.897141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:29:57.901356 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:29:57.903276 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 13 21:29:57.904218 kernel: Bridge firewalling registered
Jan 13 21:29:57.904504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:29:57.907513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:29:57.909968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:29:57.910753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:29:57.912719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:29:57.925090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:29:57.929596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:29:57.932609 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:29:57.933258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:29:57.947594 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:29:57.950007 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:29:57.958762 dracut-cmdline[226]: dracut-dracut-053
Jan 13 21:29:57.961902 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:29:57.989803 systemd-resolved[228]: Positive Trust Anchors:
Jan 13 21:29:57.989818 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:29:57.989850 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:29:57.995453 systemd-resolved[228]: Defaulting to hostname 'linux'.
Jan 13 21:29:57.996584 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:29:58.001554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:29:58.051384 kernel: SCSI subsystem initialized
Jan 13 21:29:58.060362 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:29:58.070381 kernel: iscsi: registered transport (tcp)
Jan 13 21:29:58.092358 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:29:58.092384 kernel: QLogic iSCSI HBA Driver
Jan 13 21:29:58.141078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:29:58.151469 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:29:58.177926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:29:58.177996 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:29:58.178009 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:29:58.219363 kernel: raid6: avx2x4 gen() 30188 MB/s
Jan 13 21:29:58.236362 kernel: raid6: avx2x2 gen() 30829 MB/s
Jan 13 21:29:58.253427 kernel: raid6: avx2x1 gen() 26083 MB/s
Jan 13 21:29:58.253451 kernel: raid6: using algorithm avx2x2 gen() 30829 MB/s
Jan 13 21:29:58.271426 kernel: raid6: .... xor() 19989 MB/s, rmw enabled
Jan 13 21:29:58.271448 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:29:58.292357 kernel: xor: automatically using best checksumming function avx
Jan 13 21:29:58.445376 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:29:58.459370 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:29:58.470557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:29:58.482676 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 13 21:29:58.487290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:29:58.490226 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:29:58.506526 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 13 21:29:58.541924 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:29:58.555499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:29:58.615196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:29:58.626498 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:29:58.642255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:29:58.645211 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:29:58.648252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:29:58.650766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:29:58.656361 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:29:58.686463 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:29:58.686480 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:29:58.686627 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:29:58.686639 kernel: GPT:9289727 != 19775487
Jan 13 21:29:58.686649 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:29:58.686666 kernel: GPT:9289727 != 19775487
Jan 13 21:29:58.686676 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:29:58.686686 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:29:58.660481 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:29:58.673176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:29:58.690110 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:29:58.673293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:29:58.675706 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:29:58.676825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:29:58.676978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:29:58.678216 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:29:58.699420 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:29:58.699447 kernel: libata version 3.00 loaded.
Jan 13 21:29:58.702637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:29:58.707269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:29:58.710694 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:29:58.731769 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:29:58.731786 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jan 13 21:29:58.731806 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:29:58.731960 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:29:58.732113 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473)
Jan 13 21:29:58.732125 kernel: scsi host0: ahci
Jan 13 21:29:58.732277 kernel: scsi host1: ahci
Jan 13 21:29:58.732450 kernel: scsi host2: ahci
Jan 13 21:29:58.732598 kernel: scsi host3: ahci
Jan 13 21:29:58.732756 kernel: scsi host4: ahci
Jan 13 21:29:58.732901 kernel: scsi host5: ahci
Jan 13 21:29:58.733053 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:29:58.733065 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:29:58.733075 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:29:58.733086 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:29:58.733096 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:29:58.733110 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:29:58.731251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:29:58.768726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:29:58.776582 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:29:58.791642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:29:58.797637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:29:58.800900 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:29:58.814444 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:29:58.817401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:29:58.840774 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:29:58.845720 disk-uuid[551]: Primary Header is updated.
Jan 13 21:29:58.845720 disk-uuid[551]: Secondary Entries is updated.
Jan 13 21:29:58.845720 disk-uuid[551]: Secondary Header is updated.
Jan 13 21:29:58.850354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:29:58.854351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:29:59.038360 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:29:59.038420 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:29:59.039355 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:29:59.045724 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:29:59.045785 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:29:59.045796 kernel: ata3.00: applying bridge limits
Jan 13 21:29:59.046360 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:29:59.047364 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:29:59.048356 kernel: ata3.00: configured for UDMA/100
Jan 13 21:29:59.050363 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:29:59.096876 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:29:59.108930 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:29:59.108945 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:29:59.855051 disk-uuid[562]: The operation has completed successfully.
Jan 13 21:29:59.856460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:29:59.884142 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:29:59.884259 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:29:59.905469 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:29:59.910819 sh[590]: Success
Jan 13 21:29:59.923357 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:29:59.954433 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:29:59.969784 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:29:59.974189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:29:59.985477 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:29:59.985506 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:29:59.985517 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:29:59.986485 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:29:59.987818 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:29:59.991914 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:29:59.994224 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:30:00.012457 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:30:00.014958 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:30:00.022173 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:30:00.022199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:30:00.022210 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:30:00.025352 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:30:00.034329 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:30:00.036031 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:30:00.045292 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:30:00.052471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:30:00.104832 ignition[683]: Ignition 2.19.0
Jan 13 21:30:00.104844 ignition[683]: Stage: fetch-offline
Jan 13 21:30:00.104882 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:00.104892 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:00.105015 ignition[683]: parsed url from cmdline: ""
Jan 13 21:30:00.105019 ignition[683]: no config URL provided
Jan 13 21:30:00.105025 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:30:00.105035 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:30:00.105063 ignition[683]: op(1): [started] loading QEMU firmware config module
Jan 13 21:30:00.105070 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:30:00.117531 ignition[683]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:30:00.118960 ignition[683]: parsing config with SHA512: 34d41e2b291c581929a0db85b00cd9a4d93d9b1a6084e3fbc41132371dc05ccee295a7af466a9002f65ebdfc5ea5cd615a9ce53d5111f602d33ad9f352feb728
Jan 13 21:30:00.122029 unknown[683]: fetched base config from "system"
Jan 13 21:30:00.122040 unknown[683]: fetched user config from "qemu"
Jan 13 21:30:00.122523 ignition[683]: fetch-offline: fetch-offline passed
Jan 13 21:30:00.122591 ignition[683]: Ignition finished successfully
Jan 13 21:30:00.125142 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:30:00.138540 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:30:00.146600 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:30:00.167061 systemd-networkd[780]: lo: Link UP
Jan 13 21:30:00.167071 systemd-networkd[780]: lo: Gained carrier
Jan 13 21:30:00.168616 systemd-networkd[780]: Enumeration completed
Jan 13 21:30:00.168694 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:30:00.168996 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:30:00.169000 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:30:00.169761 systemd-networkd[780]: eth0: Link UP
Jan 13 21:30:00.169764 systemd-networkd[780]: eth0: Gained carrier
Jan 13 21:30:00.169771 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:30:00.174064 systemd[1]: Reached target network.target - Network.
Jan 13 21:30:00.176759 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:30:00.184387 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:30:00.196462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:30:00.209633 ignition[783]: Ignition 2.19.0
Jan 13 21:30:00.209644 ignition[783]: Stage: kargs
Jan 13 21:30:00.209824 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:00.209836 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:00.210577 ignition[783]: kargs: kargs passed
Jan 13 21:30:00.210617 ignition[783]: Ignition finished successfully
Jan 13 21:30:00.217172 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:30:00.230502 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:30:00.243292 ignition[792]: Ignition 2.19.0
Jan 13 21:30:00.243303 ignition[792]: Stage: disks
Jan 13 21:30:00.243485 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:00.243498 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:00.244119 ignition[792]: disks: disks passed
Jan 13 21:30:00.244163 ignition[792]: Ignition finished successfully
Jan 13 21:30:00.249836 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:30:00.250477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:30:00.250796 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:30:00.251133 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:30:00.251637 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:30:00.251967 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:30:00.267533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:30:00.279248 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:30:00.285371 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:30:00.298427 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:30:00.382360 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:30:00.382850 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:30:00.385030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:30:00.401399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:30:00.403801 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:30:00.406240 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:30:00.406286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:30:00.406306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:30:00.410355 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Jan 13 21:30:00.413739 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:30:00.413754 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:30:00.413765 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:30:00.417005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:30:00.418877 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:30:00.419854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:30:00.423304 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:30:00.458363 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:30:00.463495 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:30:00.468251 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:30:00.472889 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:30:00.556444 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:30:00.570420 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:30:00.573734 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:30:00.578355 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:30:00.597803 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:30:00.599627 ignition[921]: INFO : Ignition 2.19.0
Jan 13 21:30:00.599627 ignition[921]: INFO : Stage: mount
Jan 13 21:30:00.599627 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:00.599627 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:00.599627 ignition[921]: INFO : mount: mount passed
Jan 13 21:30:00.599627 ignition[921]: INFO : Ignition finished successfully
Jan 13 21:30:00.601314 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:30:00.613428 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:30:00.984944 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:30:01.006498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:30:01.012352 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934)
Jan 13 21:30:01.014934 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:30:01.014954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:30:01.014964 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:30:01.017355 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:30:01.018655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:30:01.045976 ignition[951]: INFO : Ignition 2.19.0
Jan 13 21:30:01.045976 ignition[951]: INFO : Stage: files
Jan 13 21:30:01.047815 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:01.047815 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:01.047815 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:30:01.047815 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:30:01.047815 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:30:01.055289 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:30:01.056777 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:30:01.056777 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:30:01.055892 unknown[951]: wrote ssh authorized keys file for user: core
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:30:01.060703 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 21:30:01.532595 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 21:30:01.866580 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:30:01.866580 ignition[951]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 13 21:30:01.870379 ignition[951]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:30:01.870379 ignition[951]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:30:01.870379 ignition[951]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 13 21:30:01.870379 ignition[951]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:30:01.890883 ignition[951]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:30:01.896113 ignition[951]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:30:01.897655 ignition[951]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:30:01.897655 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:30:01.897655 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:30:01.897655 ignition[951]: INFO : files: files passed
Jan 13 21:30:01.897655 ignition[951]: INFO : Ignition finished successfully
Jan 13 21:30:01.899308 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:30:01.907558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:30:01.909428 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:30:01.911472 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:30:01.911585 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:30:01.919590 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:30:01.922475 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:30:01.922475 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:30:01.925628 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:30:01.928891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:30:01.931548 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:30:01.942427 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 13 21:30:01.944466 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:30:01.970555 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:30:01.971619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:30:01.974228 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:30:01.976233 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:30:01.978250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:30:01.985458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:30:02.000916 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:30:02.013469 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:30:02.025099 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:30:02.027466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:30:02.029822 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:30:02.031648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:30:02.032650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:30:02.035169 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:30:02.037214 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:30:02.039232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:30:02.041427 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:30:02.043763 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:30:02.045992 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:30:02.048039 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:30:02.050550 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:30:02.052636 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:30:02.054670 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:30:02.056282 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:30:02.057308 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:30:02.059603 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:30:02.061792 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:30:02.064138 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:30:02.065099 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:30:02.067698 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:30:02.068701 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:30:02.070907 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:30:02.071993 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:30:02.074371 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:30:02.076111 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:30:02.076297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:30:02.076877 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:30:02.081501 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:30:02.082082 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:30:02.082193 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:30:02.083832 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:30:02.083926 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:30:02.085682 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:30:02.085815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:30:02.087410 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:30:02.087549 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:30:02.099513 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:30:02.099806 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:30:02.099916 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:30:02.102586 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:30:02.103852 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:30:02.104046 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:30:02.105909 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:30:02.106065 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:30:02.111347 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:30:02.111473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:30:02.124815 ignition[1005]: INFO : Ignition 2.19.0
Jan 13 21:30:02.124815 ignition[1005]: INFO : Stage: umount
Jan 13 21:30:02.126629 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:30:02.126629 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:30:02.129069 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:30:02.130223 ignition[1005]: INFO : umount: umount passed
Jan 13 21:30:02.131247 ignition[1005]: INFO : Ignition finished successfully
Jan 13 21:30:02.134031 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:30:02.134154 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:30:02.134837 systemd[1]: Stopped target network.target - Network.
Jan 13 21:30:02.137210 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:30:02.137262 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:30:02.137914 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:30:02.137966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:30:02.138247 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:30:02.138288 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:30:02.138751 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:30:02.138793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:30:02.139210 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:30:02.139679 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:30:02.153132 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:30:02.156216 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:30:02.156376 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jan 13 21:30:02.160263 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:30:02.161454 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:30:02.164914 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:30:02.164979 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:30:02.180490 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:30:02.180754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:30:02.180819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:30:02.181174 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:30:02.181219 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:30:02.181655 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:30:02.181699 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:30:02.182064 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:30:02.182105 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:30:02.189935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:30:02.236294 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:30:02.236465 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:30:02.246217 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:30:02.246424 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:30:02.248649 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:30:02.248705 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:30:02.250717 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:30:02.250761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:30:02.252770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:30:02.252825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:30:02.255095 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:30:02.255147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:30:02.257060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:30:02.257109 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:30:02.268569 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:30:02.270815 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:30:02.270888 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:30:02.273157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:30:02.273208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:30:02.275881 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:30:02.276003 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:30:02.319744 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:30:02.319925 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:30:02.322348 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:30:02.323733 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:30:02.323821 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:30:02.340572 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:30:02.349580 systemd[1]: Switching root.
Jan 13 21:30:02.385024 systemd-journald[192]: Journal stopped
Jan 13 21:30:03.602183 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:30:03.602257 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:30:03.602280 kernel: SELinux: policy capability open_perms=1
Jan 13 21:30:03.602291 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:30:03.602305 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:30:03.602317 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:30:03.602384 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:30:03.602398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:30:03.602409 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:30:03.602421 kernel: audit: type=1403 audit(1736803802.901:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:30:03.602439 systemd[1]: Successfully loaded SELinux policy in 43.580ms.
Jan 13 21:30:03.602453 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.176ms.
Jan 13 21:30:03.602466 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:30:03.602481 systemd[1]: Detected virtualization kvm.
Jan 13 21:30:03.602493 systemd[1]: Detected architecture x86-64.
Jan 13 21:30:03.602505 systemd[1]: Detected first boot.
Jan 13 21:30:03.602517 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:30:03.602528 zram_generator::config[1049]: No configuration found.
Jan 13 21:30:03.602542 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:30:03.602554 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:30:03.602565 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:30:03.602580 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:30:03.602592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:30:03.602604 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:30:03.602616 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:30:03.602628 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:30:03.602642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:30:03.602654 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:30:03.602669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:30:03.602680 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:30:03.602692 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:30:03.602704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:30:03.602716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:30:03.602728 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:30:03.602741 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:30:03.602755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:30:03.602767 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:30:03.602779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:30:03.602791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:30:03.602803 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:30:03.602815 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:30:03.602826 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:30:03.602838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:30:03.602853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:30:03.602869 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:30:03.602881 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:30:03.602893 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:30:03.602904 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:30:03.602923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:30:03.602937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:30:03.602949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:30:03.602961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:30:03.602975 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:30:03.602987 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:30:03.602999 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:30:03.603011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:03.603025 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:30:03.603037 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:30:03.603049 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:30:03.603061 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:30:03.603073 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:30:03.603087 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:30:03.603099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:30:03.603111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:30:03.603123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:30:03.603134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:30:03.603147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:30:03.603158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:30:03.603170 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:30:03.603183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:30:03.603197 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:30:03.603209 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:30:03.603220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:30:03.603232 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:30:03.603244 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:30:03.603255 kernel: fuse: init (API version 7.39)
Jan 13 21:30:03.603267 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:30:03.603279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:30:03.603293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:30:03.603305 kernel: loop: module loaded
Jan 13 21:30:03.603316 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:30:03.603328 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:30:03.603352 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:30:03.603364 systemd[1]: Stopped verity-setup.service.
Jan 13 21:30:03.603377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:03.603388 kernel: ACPI: bus type drm_connector registered
Jan 13 21:30:03.603399 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:30:03.603430 systemd-journald[1119]: Collecting audit messages is disabled.
Jan 13 21:30:03.603452 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:30:03.603464 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:30:03.603476 systemd-journald[1119]: Journal started
Jan 13 21:30:03.603501 systemd-journald[1119]: Runtime Journal (/run/log/journal/87545de28f7844ab8b7e01c8b8bef721) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:30:03.388166 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:30:03.406379 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:30:03.406816 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:30:03.607089 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:30:03.608162 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:30:03.609393 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:30:03.610662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:30:03.611940 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:30:03.613413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:30:03.614966 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:30:03.615137 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:30:03.616635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:30:03.616808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:30:03.618255 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:30:03.618559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:30:03.620152 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:30:03.620320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:30:03.621873 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:30:03.622050 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:30:03.623450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:30:03.623621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:30:03.625087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:30:03.626537 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:30:03.628193 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:30:03.643248 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:30:03.651412 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:30:03.653665 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:30:03.654807 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:30:03.654838 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:30:03.656821 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:30:03.659099 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:30:03.661258 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:30:03.662437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:30:03.665226 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:30:03.671458 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:30:03.672805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:30:03.674387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:30:03.676954 systemd-journald[1119]: Time spent on flushing to /var/log/journal/87545de28f7844ab8b7e01c8b8bef721 is 20.428ms for 930 entries.
Jan 13 21:30:03.676954 systemd-journald[1119]: System Journal (/var/log/journal/87545de28f7844ab8b7e01c8b8bef721) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:30:03.713677 systemd-journald[1119]: Received client request to flush runtime journal.
Jan 13 21:30:03.713722 kernel: loop0: detected capacity change from 0 to 205544
Jan 13 21:30:03.678196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:30:03.679600 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:30:03.686013 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:30:03.688541 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:30:03.694391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:30:03.695931 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:30:03.697234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:30:03.701110 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:30:03.702721 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:30:03.711316 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:30:03.719525 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:30:03.723477 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:30:03.725234 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:30:03.726809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:30:03.733368 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:30:03.739640 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:30:03.753617 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:30:03.768119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:30:03.770567 kernel: loop1: detected capacity change from 0 to 140768
Jan 13 21:30:03.771135 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:30:03.771992 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:30:03.798450 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 13 21:30:03.798470 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 13 21:30:03.802045 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:30:03.805485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:30:03.834371 kernel: loop3: detected capacity change from 0 to 205544
Jan 13 21:30:03.842363 kernel: loop4: detected capacity change from 0 to 140768
Jan 13 21:30:03.851406 kernel: loop5: detected capacity change from 0 to 142488
Jan 13 21:30:03.857399 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:30:03.857971 (sd-merge)[1187]: Merged extensions into '/usr'.
Jan 13 21:30:03.861883 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:30:03.861899 systemd[1]: Reloading...
Jan 13 21:30:03.931360 zram_generator::config[1216]: No configuration found.
Jan 13 21:30:03.976803 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:30:04.044896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:30:04.094242 systemd[1]: Reloading finished in 231 ms.
Jan 13 21:30:04.140988 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:30:04.142568 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:30:04.152525 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:30:04.154376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:30:04.163029 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:30:04.163044 systemd[1]: Reloading...
Jan 13 21:30:04.176631 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:30:04.177019 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:30:04.178065 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:30:04.178379 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jan 13 21:30:04.178460 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jan 13 21:30:04.181873 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:30:04.181885 systemd-tmpfiles[1251]: Skipping /boot
Jan 13 21:30:04.192566 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:30:04.192578 systemd-tmpfiles[1251]: Skipping /boot
Jan 13 21:30:04.225453 zram_generator::config[1281]: No configuration found.
Jan 13 21:30:04.326168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:30:04.375780 systemd[1]: Reloading finished in 212 ms.
Jan 13 21:30:04.392842 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:30:04.406755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:30:04.415505 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:30:04.418015 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:30:04.420392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:30:04.423537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:30:04.427535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:30:04.431393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:30:04.435208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.435381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:30:04.436565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:30:04.439609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:30:04.441848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:30:04.443716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:30:04.445542 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:30:04.446637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.449495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.449669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:30:04.449824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:30:04.449918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.452270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.453013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:30:04.456959 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:30:04.458243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:30:04.458374 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:30:04.459014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:30:04.459217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:30:04.461323 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:30:04.461651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:30:04.465661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:30:04.466163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:30:04.468198 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:30:04.468581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:30:04.475015 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Jan 13 21:30:04.475322 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:30:04.476927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:30:04.481112 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:30:04.486461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:30:04.486525 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:30:04.495173 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:30:04.496956 augenrules[1351]: No rules
Jan 13 21:30:04.498452 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:30:04.499950 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:30:04.501456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:30:04.504924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:30:04.512560 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:30:04.523997 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:30:04.526932 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:30:04.535494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:30:04.559104 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:30:04.587409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1369)
Jan 13 21:30:04.615361 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:30:04.619364 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:30:04.620294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:30:04.630536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:30:04.641770 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:30:04.644179 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:30:04.644384 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:30:04.654347 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:30:04.659008 systemd-resolved[1321]: Positive Trust Anchors:
Jan 13 21:30:04.659284 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:30:04.659369 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:30:04.660767 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:30:04.664172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:30:04.668688 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:30:04.677435 systemd-networkd[1366]: lo: Link UP
Jan 13 21:30:04.677448 systemd-networkd[1366]: lo: Gained carrier
Jan 13 21:30:04.679553 systemd-resolved[1321]: Defaulting to hostname 'linux'.
Jan 13 21:30:04.681096 systemd-networkd[1366]: Enumeration completed
Jan 13 21:30:04.681491 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:30:04.681495 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:30:04.682830 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:30:04.684550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:30:04.684675 systemd-networkd[1366]: eth0: Link UP
Jan 13 21:30:04.685355 systemd-networkd[1366]: eth0: Gained carrier
Jan 13 21:30:04.685371 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:30:04.686048 systemd[1]: Reached target network.target - Network.
Jan 13 21:30:04.687258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:30:04.699398 systemd-networkd[1366]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:30:04.700288 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Jan 13 21:30:05.459006 systemd-resolved[1321]: Clock change detected. Flushing caches.
Jan 13 21:30:05.459082 systemd-timesyncd[1349]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:30:05.459133 systemd-timesyncd[1349]: Initial clock synchronization to Mon 2025-01-13 21:30:05.458974 UTC.
Jan 13 21:30:05.460542 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:30:05.460146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:30:05.465585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:30:05.533799 kernel: kvm_amd: TSC scaling supported
Jan 13 21:30:05.533869 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:30:05.533882 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:30:05.533894 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:30:05.535083 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:30:05.535106 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:30:05.557539 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:30:05.590213 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:30:05.594325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:30:05.606684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:30:05.615429 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:30:05.650519 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:30:05.652253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:30:05.653390 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:30:05.654609 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:30:05.656022 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:30:05.657499 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:30:05.658751 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:30:05.660185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:30:05.661443 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:30:05.661470 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:30:05.662397 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:30:05.663932 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:30:05.666785 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:30:05.680498 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:30:05.683126 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:30:05.684703 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:30:05.685854 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:30:05.686816 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:30:05.687771 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:30:05.687798 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:30:05.688825 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:30:05.690938 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:30:05.694519 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:30:05.694918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:30:05.700152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:30:05.703175 jq[1417]: false
Jan 13 21:30:05.701882 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:30:05.705667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:30:05.707930 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:30:05.713586 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found loop3
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found loop4
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found loop5
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found sr0
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda1
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda2
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda3
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found usr
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda4
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda6
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda7
Jan 13 21:30:05.720087 extend-filesystems[1418]: Found vda9
Jan 13 21:30:05.720087 extend-filesystems[1418]: Checking size of /dev/vda9
Jan 13 21:30:05.736113 dbus-daemon[1416]: [system] SELinux support is enabled
Jan 13 21:30:05.720655 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:30:05.722379 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:30:05.722848 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:30:05.723646 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:30:05.726424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:30:05.728764 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:30:05.737769 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:30:05.741445 jq[1431]: true
Jan 13 21:30:05.742208 extend-filesystems[1418]: Resized partition /dev/vda9
Jan 13 21:30:05.741940 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:30:05.742192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:30:05.742845 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:30:05.743216 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:30:05.745216 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:30:05.745796 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:30:05.748820 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:30:05.757518 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:30:05.761594 jq[1440]: true
Jan 13 21:30:05.765530 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1369)
Jan 13 21:30:05.766188 update_engine[1430]: I20250113 21:30:05.766110 1430 main.cc:92] Flatcar Update Engine starting
Jan 13 21:30:05.768002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:30:05.768035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:30:05.769701 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:30:05.769725 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:30:05.770428 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:30:05.773152 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:30:05.775521 update_engine[1430]: I20250113 21:30:05.773933 1430 update_check_scheduler.cc:74] Next update check in 4m35s
Jan 13 21:30:05.778110 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:30:05.789518 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:30:05.825406 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:30:05.825406 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:30:05.825406 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:30:05.824232 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:30:05.830406 extend-filesystems[1418]: Resized filesystem in /dev/vda9
Jan 13 21:30:05.824637 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:30:05.824881 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:30:05.829781 systemd-logind[1425]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:30:05.829801 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:30:05.830019 systemd-logind[1425]: New seat seat0.
Jan 13 21:30:05.830837 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:30:05.838452 bash[1466]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:30:05.839561 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:30:05.842881 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:30:05.970544 containerd[1441]: time="2025-01-13T21:30:05.970364607Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:30:05.992856 containerd[1441]: time="2025-01-13T21:30:05.992785644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994453692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994497615Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994531708Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994746632Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994766559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994852069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.994865154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.995086258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.995101928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.995115724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:30:05.995904 containerd[1441]: time="2025-01-13T21:30:05.995126113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995217264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995449039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995586066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995600874Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995700280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:30:05.996148 containerd[1441]: time="2025-01-13T21:30:05.995760563Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:30:06.001775 containerd[1441]: time="2025-01-13T21:30:06.001734927Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:30:06.001775 containerd[1441]: time="2025-01-13T21:30:06.001780272Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:30:06.001892 containerd[1441]: time="2025-01-13T21:30:06.001796563Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:30:06.001892 containerd[1441]: time="2025-01-13T21:30:06.001817792Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:30:06.001892 containerd[1441]: time="2025-01-13T21:30:06.001833261Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:30:06.001985 containerd[1441]: time="2025-01-13T21:30:06.001964958Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:30:06.002230 containerd[1441]: time="2025-01-13T21:30:06.002201051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:30:06.002334 containerd[1441]: time="2025-01-13T21:30:06.002307841Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:30:06.002334 containerd[1441]: time="2025-01-13T21:30:06.002330985Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:30:06.002384 containerd[1441]: time="2025-01-13T21:30:06.002343879Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:30:06.002384 containerd[1441]: time="2025-01-13T21:30:06.002358436Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002384 containerd[1441]: time="2025-01-13T21:30:06.002370389Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002384 containerd[1441]: time="2025-01-13T21:30:06.002381730Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002457 containerd[1441]: time="2025-01-13T21:30:06.002394644Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002457 containerd[1441]: time="2025-01-13T21:30:06.002408290Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002457 containerd[1441]: time="2025-01-13T21:30:06.002420553Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002457 containerd[1441]: time="2025-01-13T21:30:06.002432305Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002457 containerd[1441]: time="2025-01-13T21:30:06.002443115Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002461109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002474374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002486497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002498299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002524618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002540969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002559 containerd[1441]: time="2025-01-13T21:30:06.002552470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002564202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002576545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002590111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002601903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002613885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002626619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002641257Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002662677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002674369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Jan 13 21:30:06.002693 containerd[1441]: time="2025-01-13T21:30:06.002686792Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002735143Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002751433Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002761242Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002773284Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002782592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002805064Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002815904Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:30:06.002861 containerd[1441]: time="2025-01-13T21:30:06.002827556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:30:06.003150 containerd[1441]: time="2025-01-13T21:30:06.003086542Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:30:06.003150 containerd[1441]: time="2025-01-13T21:30:06.003140343Z" level=info msg="Connect containerd service" Jan 13 21:30:06.003290 containerd[1441]: time="2025-01-13T21:30:06.003177583Z" level=info msg="using legacy CRI server" Jan 13 21:30:06.003290 containerd[1441]: time="2025-01-13T21:30:06.003184586Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:30:06.003290 containerd[1441]: time="2025-01-13T21:30:06.003271118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:30:06.003884 containerd[1441]: time="2025-01-13T21:30:06.003850965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004157651Z" level=info msg="Start subscribing containerd event" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004210049Z" level=info msg="Start recovering state" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004168771Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004271494Z" level=info msg="Start event monitor" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004307872Z" level=info msg="Start snapshots syncer" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004317791Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004325395Z" level=info msg="Start streaming server" Jan 13 21:30:06.004438 containerd[1441]: time="2025-01-13T21:30:06.004329693Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:30:06.004645 containerd[1441]: time="2025-01-13T21:30:06.004496526Z" level=info msg="containerd successfully booted in 0.035279s" Jan 13 21:30:06.004740 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:30:06.072032 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:30:06.095596 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:30:06.105888 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:30:06.113880 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:30:06.114125 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:30:06.116843 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
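
The "failed to load cni during init" error above is expected at this stage: containerd's CRI plugin starts before any network plugin has dropped a config into /etc/cni/net.d, and the Calico pods scheduled further down in this log are what will eventually supply the real one. Purely as a hypothetical sketch, a minimal conflist that would satisfy the check (reusing the 192.168.1.0/24 pod CIDR that appears later in this log) looks like:

    # Hypothetical: not what this host uses; Calico installs the real config.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-mynet.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "mynet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        }
      ]
    }
    EOF
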
Jan 13 21:30:06.132810 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:30:06.140758 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:30:06.142780 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:30:06.144029 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:30:06.410752 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:30:06.413089 systemd[1]: Started sshd@0-10.0.0.157:22-10.0.0.1:51682.service - OpenSSH per-connection server daemon (10.0.0.1:51682). Jan 13 21:30:06.452694 sshd[1499]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:06.454574 sshd[1499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:06.463272 systemd-logind[1425]: New session 1 of user core. Jan 13 21:30:06.464550 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:30:06.474721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:30:06.485918 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:30:06.500795 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:30:06.504873 (systemd)[1503]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:30:06.620256 systemd[1503]: Queued start job for default target default.target. Jan 13 21:30:06.631768 systemd[1503]: Created slice app.slice - User Application Slice. Jan 13 21:30:06.631793 systemd[1503]: Reached target paths.target - Paths. Jan 13 21:30:06.631805 systemd[1503]: Reached target timers.target - Timers. Jan 13 21:30:06.633435 systemd[1503]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:30:06.644966 systemd[1503]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:30:06.645138 systemd[1503]: Reached target sockets.target - Sockets. Jan 13 21:30:06.645160 systemd[1503]: Reached target basic.target - Basic System. Jan 13 21:30:06.645209 systemd[1503]: Reached target default.target - Main User Target. Jan 13 21:30:06.645246 systemd[1503]: Startup finished in 132ms. Jan 13 21:30:06.645575 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:30:06.648145 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:30:06.710336 systemd[1]: Started sshd@1-10.0.0.157:22-10.0.0.1:51684.service - OpenSSH per-connection server daemon (10.0.0.1:51684). Jan 13 21:30:06.745479 sshd[1514]: Accepted publickey for core from 10.0.0.1 port 51684 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:06.746912 sshd[1514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:06.750641 systemd-logind[1425]: New session 2 of user core. Jan 13 21:30:06.762650 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:30:06.817458 sshd[1514]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:06.828195 systemd[1]: sshd@1-10.0.0.157:22-10.0.0.1:51684.service: Deactivated successfully. Jan 13 21:30:06.829881 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:30:06.831375 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:30:06.838727 systemd[1]: Started sshd@2-10.0.0.157:22-10.0.0.1:51692.service - OpenSSH per-connection server daemon (10.0.0.1:51692). 
Jan 13 21:30:06.840941 systemd-logind[1425]: Removed session 2. Jan 13 21:30:06.865774 sshd[1521]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:06.867214 sshd[1521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:06.871208 systemd-logind[1425]: New session 3 of user core. Jan 13 21:30:06.886641 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:30:06.924685 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 13 21:30:06.928829 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:30:06.930800 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:30:06.941825 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:30:06.944923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:30:06.947276 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:30:06.953258 sshd[1521]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:06.958693 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:30:06.959549 systemd[1]: sshd@2-10.0.0.157:22-10.0.0.1:51692.service: Deactivated successfully. Jan 13 21:30:06.962372 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:30:06.964455 systemd-logind[1425]: Removed session 3. Jan 13 21:30:06.969562 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:30:06.969792 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:30:06.971562 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:30:06.973824 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:30:07.564537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:30:07.566139 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:30:07.567623 systemd[1]: Startup finished in 691ms (kernel) + 5.208s (initrd) + 3.950s (userspace) = 9.851s. Jan 13 21:30:07.599979 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:30:08.006710 kubelet[1549]: E0113 21:30:08.006633 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:30:08.010820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:30:08.011038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:30:16.967168 systemd[1]: Started sshd@3-10.0.0.157:22-10.0.0.1:54434.service - OpenSSH per-connection server daemon (10.0.0.1:54434). Jan 13 21:30:16.997963 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 54434 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:16.999553 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.003501 systemd-logind[1425]: New session 4 of user core. Jan 13 21:30:17.012637 systemd[1]: Started session-4.scope - Session 4 of User core. 
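
The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal first-boot failure of a node that has not yet been joined to a cluster: the config file is written by the join step (presumably the /home/core/install.sh run under sudo further down), after which systemd's restart logic brings the kubelet back up. A hand-written minimal sketch of such a file, with illustrative values rather than anything taken from this host, would be:

    # Sketch only: on this host the file is produced by the install/join step.
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches SystemdCgroup:true in containerd above
    staticPodPath: /etc/kubernetes/manifests
    EOF
    sudo systemctl restart kubelet
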
Jan 13 21:30:17.067372 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:17.076879 systemd[1]: sshd@3-10.0.0.157:22-10.0.0.1:54434.service: Deactivated successfully. Jan 13 21:30:17.078695 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:30:17.080365 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:30:17.091808 systemd[1]: Started sshd@4-10.0.0.157:22-10.0.0.1:54442.service - OpenSSH per-connection server daemon (10.0.0.1:54442). Jan 13 21:30:17.092770 systemd-logind[1425]: Removed session 4. Jan 13 21:30:17.119199 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 54442 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:17.120724 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.124714 systemd-logind[1425]: New session 5 of user core. Jan 13 21:30:17.134622 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:30:17.184354 sshd[1569]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:17.204307 systemd[1]: sshd@4-10.0.0.157:22-10.0.0.1:54442.service: Deactivated successfully. Jan 13 21:30:17.206038 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:30:17.207645 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:30:17.208861 systemd[1]: Started sshd@5-10.0.0.157:22-10.0.0.1:54454.service - OpenSSH per-connection server daemon (10.0.0.1:54454). Jan 13 21:30:17.209563 systemd-logind[1425]: Removed session 5. Jan 13 21:30:17.240437 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 54454 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:17.242141 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.245843 systemd-logind[1425]: New session 6 of user core. Jan 13 21:30:17.257625 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:30:17.312107 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:17.331549 systemd[1]: sshd@5-10.0.0.157:22-10.0.0.1:54454.service: Deactivated successfully. Jan 13 21:30:17.333404 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:30:17.335184 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:30:17.336621 systemd[1]: Started sshd@6-10.0.0.157:22-10.0.0.1:43874.service - OpenSSH per-connection server daemon (10.0.0.1:43874). Jan 13 21:30:17.337416 systemd-logind[1425]: Removed session 6. Jan 13 21:30:17.369207 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:17.370972 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.374955 systemd-logind[1425]: New session 7 of user core. Jan 13 21:30:17.384757 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:30:17.443115 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:30:17.443453 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:30:17.461633 sudo[1586]: pam_unix(sudo:session): session closed for user root Jan 13 21:30:17.463773 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:17.480351 systemd[1]: sshd@6-10.0.0.157:22-10.0.0.1:43874.service: Deactivated successfully. 
Jan 13 21:30:17.482184 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:30:17.483517 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:30:17.491723 systemd[1]: Started sshd@7-10.0.0.157:22-10.0.0.1:43878.service - OpenSSH per-connection server daemon (10.0.0.1:43878). Jan 13 21:30:17.492713 systemd-logind[1425]: Removed session 7. Jan 13 21:30:17.518531 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 43878 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:17.520094 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.523952 systemd-logind[1425]: New session 8 of user core. Jan 13 21:30:17.535633 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:30:17.588447 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:30:17.588807 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:30:17.592644 sudo[1595]: pam_unix(sudo:session): session closed for user root Jan 13 21:30:17.598592 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:30:17.598922 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:30:17.616714 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:30:17.618328 auditctl[1598]: No rules Jan 13 21:30:17.618736 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:30:17.618936 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:30:17.621343 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:30:17.651057 augenrules[1616]: No rules Jan 13 21:30:17.652801 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:30:17.654065 sudo[1594]: pam_unix(sudo:session): session closed for user root Jan 13 21:30:17.656138 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:17.666760 systemd[1]: sshd@7-10.0.0.157:22-10.0.0.1:43878.service: Deactivated successfully. Jan 13 21:30:17.668652 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:30:17.670231 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:30:17.671651 systemd[1]: Started sshd@8-10.0.0.157:22-10.0.0.1:43884.service - OpenSSH per-connection server daemon (10.0.0.1:43884). Jan 13 21:30:17.672438 systemd-logind[1425]: Removed session 8. Jan 13 21:30:17.711931 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 43884 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:30:17.713764 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:30:17.717910 systemd-logind[1425]: New session 9 of user core. Jan 13 21:30:17.731613 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:30:17.783824 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:30:17.784152 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:30:17.805773 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:30:17.823806 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:30:17.824038 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
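
The auditctl/augenrules exchange above is the expansion of the 'systemctl restart audit-rules' issued through sudo: the unit's stop step flushes the loaded ruleset and its start step reloads whatever remains under /etc/audit/rules.d (nothing, after the rm above, hence the two "No rules" reports). The same reload by hand, as a sketch:

    # After removing rule files, rebuild and reload the (now empty) ruleset.
    sudo systemctl restart audit-rules
    sudo auditctl -l    # list loaded rules; prints "No rules" when empty
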
Jan 13 21:30:18.100886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:30:18.112734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:30:18.264561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:30:18.264654 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:30:18.264921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:30:18.268097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:30:18.289635 systemd[1]: Reloading requested from client PID 1675 ('systemctl') (unit session-9.scope)... Jan 13 21:30:18.289652 systemd[1]: Reloading... Jan 13 21:30:18.369527 zram_generator::config[1716]: No configuration found. Jan 13 21:30:18.777367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:30:18.852563 systemd[1]: Reloading finished in 562 ms. Jan 13 21:30:18.905676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:30:18.909635 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:30:18.909881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:30:18.911362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:30:19.055653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:30:19.059977 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:30:19.092773 kubelet[1763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:30:19.092773 kubelet[1763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:30:19.092773 kubelet[1763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
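
All three deprecation warnings above point the same way: flag values should migrate into the file named by --config. Two of them have direct KubeletConfiguration equivalents; the pod-infra (sandbox) image is the exception, since, as the message says, it now comes from the CRI runtime, whose config dump earlier in this log already shows SandboxImage:registry.k8s.io/pause:3.8. A hypothetical sketch of the migrated settings, using the endpoint and plugin directory that this log reports:

    # Hypothetical additions to /var/lib/kubelet/config.yaml:
    sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    sudo systemctl restart kubelet
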
Jan 13 21:30:19.093846 kubelet[1763]: I0113 21:30:19.093793 1763 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:30:19.311562 kubelet[1763]: I0113 21:30:19.311298 1763 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:30:19.311562 kubelet[1763]: I0113 21:30:19.311336 1763 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:30:19.312178 kubelet[1763]: I0113 21:30:19.312156 1763 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:30:19.329025 kubelet[1763]: I0113 21:30:19.328999 1763 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:30:19.335993 kubelet[1763]: E0113 21:30:19.335956 1763 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:30:19.335993 kubelet[1763]: I0113 21:30:19.335985 1763 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:30:19.342081 kubelet[1763]: I0113 21:30:19.342057 1763 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:30:19.342936 kubelet[1763]: I0113 21:30:19.342918 1763 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:30:19.343099 kubelet[1763]: I0113 21:30:19.343072 1763 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:30:19.343251 kubelet[1763]: I0113 21:30:19.343096 1763 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.157","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:30:19.343336 kubelet[1763]: I0113 21:30:19.343261 1763 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 13 21:30:19.343336 kubelet[1763]: I0113 21:30:19.343271 1763 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:30:19.343387 kubelet[1763]: I0113 21:30:19.343376 1763 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:30:19.344662 kubelet[1763]: I0113 21:30:19.344639 1763 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:30:19.344662 kubelet[1763]: I0113 21:30:19.344656 1763 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:30:19.344723 kubelet[1763]: I0113 21:30:19.344700 1763 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:30:19.344723 kubelet[1763]: I0113 21:30:19.344715 1763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:30:19.344806 kubelet[1763]: E0113 21:30:19.344752 1763 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:19.344806 kubelet[1763]: E0113 21:30:19.344791 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:19.348968 kubelet[1763]: I0113 21:30:19.348939 1763 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:30:19.350834 kubelet[1763]: I0113 21:30:19.350693 1763 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:30:19.352123 kubelet[1763]: W0113 21:30:19.351179 1763 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:30:19.352123 kubelet[1763]: I0113 21:30:19.351999 1763 server.go:1269] "Started kubelet" Jan 13 21:30:19.352517 kubelet[1763]: W0113 21:30:19.352467 1763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.157" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:30:19.352641 kubelet[1763]: W0113 21:30:19.352626 1763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:30:19.352665 kubelet[1763]: E0113 21:30:19.352641 1763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 21:30:19.352665 kubelet[1763]: E0113 21:30:19.352653 1763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.157\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 21:30:19.352767 kubelet[1763]: I0113 21:30:19.352743 1763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:30:19.352793 kubelet[1763]: I0113 21:30:19.352728 1763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:30:19.353150 kubelet[1763]: I0113 21:30:19.353133 1763 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:30:19.354356 
kubelet[1763]: I0113 21:30:19.354115 1763 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:30:19.355057 kubelet[1763]: I0113 21:30:19.354691 1763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:30:19.355779 kubelet[1763]: I0113 21:30:19.355751 1763 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:30:19.355936 kubelet[1763]: I0113 21:30:19.355841 1763 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:30:19.355936 kubelet[1763]: I0113 21:30:19.355901 1763 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:30:19.356210 kubelet[1763]: I0113 21:30:19.356191 1763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:30:19.357439 kubelet[1763]: E0113 21:30:19.356960 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:19.358360 kubelet[1763]: E0113 21:30:19.358286 1763 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.157\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:30:19.358408 kubelet[1763]: W0113 21:30:19.358376 1763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:30:19.358408 kubelet[1763]: E0113 21:30:19.358397 1763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 13 21:30:19.359206 kubelet[1763]: I0113 21:30:19.359179 1763 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:30:19.359206 kubelet[1763]: I0113 21:30:19.359196 1763 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:30:19.359274 kubelet[1763]: E0113 21:30:19.359211 1763 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:30:19.359274 kubelet[1763]: I0113 21:30:19.359262 1763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:30:19.371767 kubelet[1763]: I0113 21:30:19.371749 1763 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:30:19.371767 kubelet[1763]: I0113 21:30:19.371763 1763 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:30:19.371856 kubelet[1763]: I0113 21:30:19.371778 1763 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:30:19.371971 kubelet[1763]: E0113 21:30:19.370086 1763 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.157.181a5ddf2904323f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.157,UID:10.0.0.157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.157,},FirstTimestamp:2025-01-13 21:30:19.351978559 +0000 UTC m=+0.287995034,LastTimestamp:2025-01-13 21:30:19.351978559 +0000 UTC m=+0.287995034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.157,}" Jan 13 21:30:19.375061 kubelet[1763]: E0113 21:30:19.375002 1763 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.157.181a5ddf29726d43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.157,UID:10.0.0.157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.157,},FirstTimestamp:2025-01-13 21:30:19.359202627 +0000 UTC m=+0.295219102,LastTimestamp:2025-01-13 21:30:19.359202627 +0000 UTC m=+0.295219102,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.157,}" Jan 13 21:30:19.378548 kubelet[1763]: E0113 21:30:19.378433 1763 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.157.181a5ddf2a27ff33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.157,UID:10.0.0.157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.157 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.157,},FirstTimestamp:2025-01-13 21:30:19.371102003 +0000 UTC m=+0.307118478,LastTimestamp:2025-01-13 21:30:19.371102003 +0000 UTC m=+0.307118478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.157,}" Jan 13 21:30:19.382025 kubelet[1763]: E0113 21:30:19.381908 1763 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.157.181a5ddf2a2810ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.157,UID:10.0.0.157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.157 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.157,},FirstTimestamp:2025-01-13 21:30:19.371106541 +0000 UTC m=+0.307123016,LastTimestamp:2025-01-13 21:30:19.371106541 +0000 UTC m=+0.307123016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.157,}" Jan 13 21:30:19.386242 kubelet[1763]: E0113 21:30:19.386126 1763 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.157.181a5ddf2a281b24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.157,UID:10.0.0.157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.157 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.157,},FirstTimestamp:2025-01-13 21:30:19.371109156 +0000 UTC m=+0.307125631,LastTimestamp:2025-01-13 21:30:19.371109156 +0000 UTC m=+0.307125631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.157,}" Jan 13 21:30:19.457365 kubelet[1763]: E0113 21:30:19.457325 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:19.558083 kubelet[1763]: E0113 21:30:19.558057 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:19.593682 kubelet[1763]: E0113 21:30:19.593545 1763 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.157\" not found" node="10.0.0.157" Jan 13 21:30:19.658957 kubelet[1763]: E0113 21:30:19.658897 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:19.759462 kubelet[1763]: E0113 21:30:19.759416 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:19.805316 kubelet[1763]: I0113 21:30:19.805272 1763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:30:19.806578 kubelet[1763]: I0113 21:30:19.806558 1763 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:30:19.806664 kubelet[1763]: I0113 21:30:19.806601 1763 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:30:19.806664 kubelet[1763]: I0113 21:30:19.806626 1763 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:30:19.807799 kubelet[1763]: E0113 21:30:19.806763 1763 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:30:19.813369 kubelet[1763]: I0113 21:30:19.813341 1763 policy_none.go:49] "None policy: Start" Jan 13 21:30:19.813922 kubelet[1763]: I0113 21:30:19.813902 1763 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:30:19.813966 kubelet[1763]: I0113 21:30:19.813926 1763 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:30:19.820435 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:30:19.835205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:30:19.838074 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:30:19.849042 kubelet[1763]: I0113 21:30:19.848416 1763 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:30:19.849042 kubelet[1763]: I0113 21:30:19.848821 1763 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:30:19.849042 kubelet[1763]: I0113 21:30:19.848836 1763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:30:19.849190 kubelet[1763]: I0113 21:30:19.849072 1763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:30:19.850182 kubelet[1763]: E0113 21:30:19.850141 1763 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.157\" not found" Jan 13 21:30:19.950616 kubelet[1763]: I0113 21:30:19.950574 1763 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.157" Jan 13 21:30:19.982713 kubelet[1763]: I0113 21:30:19.982672 1763 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.157" Jan 13 21:30:19.982713 kubelet[1763]: E0113 21:30:19.982697 1763 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.157\": node \"10.0.0.157\" not found" Jan 13 21:30:19.993736 kubelet[1763]: E0113 21:30:19.993693 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.094762 kubelet[1763]: E0113 21:30:20.094705 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.195284 kubelet[1763]: E0113 21:30:20.195227 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.292171 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 13 21:30:20.293768 sshd[1624]: pam_unix(sshd:session): session closed for user core Jan 13 21:30:20.295726 kubelet[1763]: E0113 21:30:20.295701 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.297134 systemd[1]: sshd@8-10.0.0.157:22-10.0.0.1:43884.service: Deactivated successfully. 
Jan 13 21:30:20.298887 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:30:20.299450 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:30:20.300354 systemd-logind[1425]: Removed session 9. Jan 13 21:30:20.313886 kubelet[1763]: I0113 21:30:20.313854 1763 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:30:20.314061 kubelet[1763]: W0113 21:30:20.314015 1763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:30:20.314061 kubelet[1763]: W0113 21:30:20.314028 1763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:30:20.345192 kubelet[1763]: E0113 21:30:20.345161 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:20.395824 kubelet[1763]: E0113 21:30:20.395774 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.496716 kubelet[1763]: E0113 21:30:20.496546 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.597124 kubelet[1763]: E0113 21:30:20.597083 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.697672 kubelet[1763]: E0113 21:30:20.697614 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.798290 kubelet[1763]: E0113 21:30:20.798172 1763 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.157\" not found" Jan 13 21:30:20.899093 kubelet[1763]: I0113 21:30:20.899073 1763 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:30:20.899516 containerd[1441]: time="2025-01-13T21:30:20.899462965Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:30:20.899893 kubelet[1763]: I0113 21:30:20.899654 1763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:30:21.345543 kubelet[1763]: I0113 21:30:21.345490 1763 apiserver.go:52] "Watching apiserver" Jan 13 21:30:21.345987 kubelet[1763]: E0113 21:30:21.345485 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:21.348525 kubelet[1763]: E0113 21:30:21.348467 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:21.356050 systemd[1]: Created slice kubepods-besteffort-pod1c2431fa_4aba_49e8_ac42_d23815f6dfbc.slice - libcontainer container kubepods-besteffort-pod1c2431fa_4aba_49e8_ac42_d23815f6dfbc.slice. 
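
The "network is not ready ... cni plugin not initialized" pod error above will persist until the calico-node pod writes its CNI config; until then the runtime keeps reporting a NetworkReady=false condition. A sketch of checking that condition directly against the CRI endpoint this log uses:

    # Hypothetical readiness check against the containerd CRI socket:
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        info | grep -i networkready
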
Jan 13 21:30:21.356436 kubelet[1763]: I0113 21:30:21.356320 1763 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:30:21.365457 systemd[1]: Created slice kubepods-besteffort-pod8aff7c8e_f9ff_4cdb_b23b_b3f4e4a01d5a.slice - libcontainer container kubepods-besteffort-pod8aff7c8e_f9ff_4cdb_b23b_b3f4e4a01d5a.slice. Jan 13 21:30:21.365621 kubelet[1763]: I0113 21:30:21.365449 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b39eacdd-e838-4890-93a2-6a032889b329-socket-dir\") pod \"csi-node-driver-77qcb\" (UID: \"b39eacdd-e838-4890-93a2-6a032889b329\") " pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:21.365621 kubelet[1763]: I0113 21:30:21.365497 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b39eacdd-e838-4890-93a2-6a032889b329-registration-dir\") pod \"csi-node-driver-77qcb\" (UID: \"b39eacdd-e838-4890-93a2-6a032889b329\") " pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:21.365621 kubelet[1763]: I0113 21:30:21.365536 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c2431fa-4aba-49e8-ac42-d23815f6dfbc-lib-modules\") pod \"kube-proxy-55jzp\" (UID: \"1c2431fa-4aba-49e8-ac42-d23815f6dfbc\") " pod="kube-system/kube-proxy-55jzp" Jan 13 21:30:21.365621 kubelet[1763]: I0113 21:30:21.365552 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-tigera-ca-bundle\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365621 kubelet[1763]: I0113 21:30:21.365569 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-var-run-calico\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365807 kubelet[1763]: I0113 21:30:21.365585 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-cni-log-dir\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365807 kubelet[1763]: I0113 21:30:21.365602 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-flexvol-driver-host\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365807 kubelet[1763]: I0113 21:30:21.365618 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b39eacdd-e838-4890-93a2-6a032889b329-varrun\") pod \"csi-node-driver-77qcb\" (UID: \"b39eacdd-e838-4890-93a2-6a032889b329\") " pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:21.365807 kubelet[1763]: I0113 21:30:21.365645 1763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c2431fa-4aba-49e8-ac42-d23815f6dfbc-kube-proxy\") pod \"kube-proxy-55jzp\" (UID: \"1c2431fa-4aba-49e8-ac42-d23815f6dfbc\") " pod="kube-system/kube-proxy-55jzp" Jan 13 21:30:21.365807 kubelet[1763]: I0113 21:30:21.365659 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-lib-modules\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365962 kubelet[1763]: I0113 21:30:21.365671 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-xtables-lock\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365962 kubelet[1763]: I0113 21:30:21.365686 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6j2k\" (UniqueName: \"kubernetes.io/projected/b39eacdd-e838-4890-93a2-6a032889b329-kube-api-access-p6j2k\") pod \"csi-node-driver-77qcb\" (UID: \"b39eacdd-e838-4890-93a2-6a032889b329\") " pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:21.365962 kubelet[1763]: I0113 21:30:21.365728 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqxjh\" (UniqueName: \"kubernetes.io/projected/1c2431fa-4aba-49e8-ac42-d23815f6dfbc-kube-api-access-wqxjh\") pod \"kube-proxy-55jzp\" (UID: \"1c2431fa-4aba-49e8-ac42-d23815f6dfbc\") " pod="kube-system/kube-proxy-55jzp" Jan 13 21:30:21.365962 kubelet[1763]: I0113 21:30:21.365767 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-var-lib-calico\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.365962 kubelet[1763]: I0113 21:30:21.365794 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88hnv\" (UniqueName: \"kubernetes.io/projected/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-kube-api-access-88hnv\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.366128 kubelet[1763]: I0113 21:30:21.365839 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c2431fa-4aba-49e8-ac42-d23815f6dfbc-xtables-lock\") pod \"kube-proxy-55jzp\" (UID: \"1c2431fa-4aba-49e8-ac42-d23815f6dfbc\") " pod="kube-system/kube-proxy-55jzp" Jan 13 21:30:21.366128 kubelet[1763]: I0113 21:30:21.365858 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-policysync\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.366128 kubelet[1763]: I0113 21:30:21.365900 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-node-certs\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.366128 kubelet[1763]: I0113 21:30:21.365932 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-cni-bin-dir\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.366128 kubelet[1763]: I0113 21:30:21.365947 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a-cni-net-dir\") pod \"calico-node-bqr4w\" (UID: \"8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a\") " pod="calico-system/calico-node-bqr4w" Jan 13 21:30:21.366289 kubelet[1763]: I0113 21:30:21.365961 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b39eacdd-e838-4890-93a2-6a032889b329-kubelet-dir\") pod \"csi-node-driver-77qcb\" (UID: \"b39eacdd-e838-4890-93a2-6a032889b329\") " pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:21.467470 kubelet[1763]: E0113 21:30:21.467430 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.467470 kubelet[1763]: W0113 21:30:21.467455 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.467470 kubelet[1763]: E0113 21:30:21.467474 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:21.467732 kubelet[1763]: E0113 21:30:21.467714 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.467732 kubelet[1763]: W0113 21:30:21.467727 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.467807 kubelet[1763]: E0113 21:30:21.467737 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:21.470031 kubelet[1763]: E0113 21:30:21.470012 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.470031 kubelet[1763]: W0113 21:30:21.470027 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.470191 kubelet[1763]: E0113 21:30:21.470037 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:21.474051 kubelet[1763]: E0113 21:30:21.474037 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.474145 kubelet[1763]: W0113 21:30:21.474105 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.474145 kubelet[1763]: E0113 21:30:21.474119 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:21.475012 kubelet[1763]: E0113 21:30:21.474995 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.475012 kubelet[1763]: W0113 21:30:21.475005 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.475163 kubelet[1763]: E0113 21:30:21.475020 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:21.475231 kubelet[1763]: E0113 21:30:21.475208 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:21.475231 kubelet[1763]: W0113 21:30:21.475216 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:21.475231 kubelet[1763]: E0113 21:30:21.475224 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:21.665028 kubelet[1763]: E0113 21:30:21.664992 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:21.665640 containerd[1441]: time="2025-01-13T21:30:21.665585248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55jzp,Uid:1c2431fa-4aba-49e8-ac42-d23815f6dfbc,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:21.668518 kubelet[1763]: E0113 21:30:21.668478 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:21.668990 containerd[1441]: time="2025-01-13T21:30:21.668945950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bqr4w,Uid:8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:22.346087 kubelet[1763]: E0113 21:30:22.346050 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:22.496806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919484373.mount: Deactivated successfully. 
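
[Annotation] The repeated driver-call.go errors above come from the kubelet's FlexVolume prober exec-ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the "init" argument and unmarshalling its stdout as JSON; the binary is absent, so the output is empty and json.Unmarshal fails with "unexpected end of JSON input". A minimal Go sketch of that call-and-decode pattern (illustrative only, not the kubelet source; DriverStatus follows the documented FlexVolume response shape):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus models the JSON a FlexVolume driver must print, e.g.
// {"status":"Success","capabilities":{"attach":false}} for "init".
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*DriverStatus, error) {
	out, err := exec.Command(driver, args...).CombinedOutput()
	if err != nil {
		// An absent binary surfaces as "executable file not found in $PATH"
		// with empty output, exactly as logged above.
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}
	var status DriverStatus
	if jerr := json.Unmarshal(out, &status); jerr != nil {
		// Empty output decodes to "unexpected end of JSON input".
		return nil, jerr
	}
	return &status, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println("init:", err)
}
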
Jan 13 21:30:22.504988 containerd[1441]: time="2025-01-13T21:30:22.504947732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:30:22.505979 containerd[1441]: time="2025-01-13T21:30:22.505932368Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:30:22.507038 containerd[1441]: time="2025-01-13T21:30:22.506990964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:30:22.507943 containerd[1441]: time="2025-01-13T21:30:22.507881193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:30:22.508969 containerd[1441]: time="2025-01-13T21:30:22.508930481Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:30:22.511724 containerd[1441]: time="2025-01-13T21:30:22.511676511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:30:22.512802 containerd[1441]: time="2025-01-13T21:30:22.512770653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 843.707974ms" Jan 13 21:30:22.514912 containerd[1441]: time="2025-01-13T21:30:22.514884096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 849.210453ms" Jan 13 21:30:22.610774 containerd[1441]: time="2025-01-13T21:30:22.610647877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:22.610999 containerd[1441]: time="2025-01-13T21:30:22.610927352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:22.610999 containerd[1441]: time="2025-01-13T21:30:22.610979890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:22.611070 containerd[1441]: time="2025-01-13T21:30:22.610997824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:22.611447 containerd[1441]: time="2025-01-13T21:30:22.611299870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:22.611447 containerd[1441]: time="2025-01-13T21:30:22.611271337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:22.611447 containerd[1441]: time="2025-01-13T21:30:22.611298778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:22.611447 containerd[1441]: time="2025-01-13T21:30:22.611383447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:22.676711 systemd[1]: Started cri-containerd-a6b143e06db641e8a3e069493965c4b7109dec12138974bea441c71391d8fef8.scope - libcontainer container a6b143e06db641e8a3e069493965c4b7109dec12138974bea441c71391d8fef8. Jan 13 21:30:22.678545 systemd[1]: Started cri-containerd-d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f.scope - libcontainer container d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f. Jan 13 21:30:22.701735 containerd[1441]: time="2025-01-13T21:30:22.701692629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55jzp,Uid:1c2431fa-4aba-49e8-ac42-d23815f6dfbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6b143e06db641e8a3e069493965c4b7109dec12138974bea441c71391d8fef8\"" Jan 13 21:30:22.703569 kubelet[1763]: E0113 21:30:22.703271 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:22.704481 containerd[1441]: time="2025-01-13T21:30:22.704451413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bqr4w,Uid:8aff7c8e-f9ff-4cdb-b23b-b3f4e4a01d5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\"" Jan 13 21:30:22.704688 containerd[1441]: time="2025-01-13T21:30:22.704497018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:30:22.705084 kubelet[1763]: E0113 21:30:22.705054 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:22.807384 kubelet[1763]: E0113 21:30:22.807316 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:23.346435 kubelet[1763]: E0113 21:30:23.346391 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:24.346798 kubelet[1763]: E0113 21:30:24.346751 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:24.578832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439456400.mount: Deactivated successfully. 
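
[Annotation] The RunPodSandbox lines show the kubelet driving containerd over the CRI gRPC API: it sends a PodSandboxMetadata{Name, Uid, Namespace, Attempt} and gets back a sandbox ID that later CreateContainer calls reference. A rough sketch of the same call through the CRI v1 client (an assumption-laden illustration: default containerd socket path, error handling trimmed):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Mirrors the metadata printed in the log entries above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-55jzp",
				Uid:       "1c2431fa-4aba-49e8-ac42-d23815f6dfbc",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. a6b143e0...
}
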
Jan 13 21:30:24.807816 kubelet[1763]: E0113 21:30:24.807769 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:24.880275 containerd[1441]: time="2025-01-13T21:30:24.880222881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.883586 containerd[1441]: time="2025-01-13T21:30:24.883518561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 21:30:24.884831 containerd[1441]: time="2025-01-13T21:30:24.884796388Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.886997 containerd[1441]: time="2025-01-13T21:30:24.886935860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:24.887469 containerd[1441]: time="2025-01-13T21:30:24.887429756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.182738093s" Jan 13 21:30:24.887469 containerd[1441]: time="2025-01-13T21:30:24.887458901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 21:30:24.888349 containerd[1441]: time="2025-01-13T21:30:24.888319164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:30:24.889627 containerd[1441]: time="2025-01-13T21:30:24.889595107Z" level=info msg="CreateContainer within sandbox \"a6b143e06db641e8a3e069493965c4b7109dec12138974bea441c71391d8fef8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:30:24.904023 containerd[1441]: time="2025-01-13T21:30:24.903987628Z" level=info msg="CreateContainer within sandbox \"a6b143e06db641e8a3e069493965c4b7109dec12138974bea441c71391d8fef8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c86c37cc7d907a99df8b81649f7cf07bf56737789d7be9ba8232e9a1e8d62c1d\"" Jan 13 21:30:24.904571 containerd[1441]: time="2025-01-13T21:30:24.904527891Z" level=info msg="StartContainer for \"c86c37cc7d907a99df8b81649f7cf07bf56737789d7be9ba8232e9a1e8d62c1d\"" Jan 13 21:30:24.934655 systemd[1]: Started cri-containerd-c86c37cc7d907a99df8b81649f7cf07bf56737789d7be9ba8232e9a1e8d62c1d.scope - libcontainer container c86c37cc7d907a99df8b81649f7cf07bf56737789d7be9ba8232e9a1e8d62c1d. 
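
[Annotation] The ImageCreate/Pulled events above are containerd resolving the kube-proxy tag to a digest, fetching roughly 30 MB of layers, and reporting the elapsed time ("in 2.182738093s"). The equivalent pull through the containerd Go client would look roughly like this (a sketch; the "k8s.io" namespace is where CRI-managed images live):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI keeps its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	image, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.31.4", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", image.Name(), "digest:", image.Target().Digest)
}
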
Jan 13 21:30:24.963765 containerd[1441]: time="2025-01-13T21:30:24.963694530Z" level=info msg="StartContainer for \"c86c37cc7d907a99df8b81649f7cf07bf56737789d7be9ba8232e9a1e8d62c1d\" returns successfully" Jan 13 21:30:25.347147 kubelet[1763]: E0113 21:30:25.347105 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:25.819216 kubelet[1763]: E0113 21:30:25.819188 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:25.826538 kubelet[1763]: I0113 21:30:25.826450 1763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55jzp" podStartSLOduration=3.642271951 podStartE2EDuration="5.82643727s" podCreationTimestamp="2025-01-13 21:30:20 +0000 UTC" firstStartedPulling="2025-01-13 21:30:22.704017479 +0000 UTC m=+3.640033954" lastFinishedPulling="2025-01-13 21:30:24.888182808 +0000 UTC m=+5.824199273" observedRunningTime="2025-01-13 21:30:25.826220634 +0000 UTC m=+6.762237109" watchObservedRunningTime="2025-01-13 21:30:25.82643727 +0000 UTC m=+6.762453745" Jan 13 21:30:25.887454 kubelet[1763]: E0113 21:30:25.887431 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.887454 kubelet[1763]: W0113 21:30:25.887448 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.887615 kubelet[1763]: E0113 21:30:25.887466 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.887702 kubelet[1763]: E0113 21:30:25.887684 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.887735 kubelet[1763]: W0113 21:30:25.887720 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.887735 kubelet[1763]: E0113 21:30:25.887729 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.887947 kubelet[1763]: E0113 21:30:25.887925 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.887947 kubelet[1763]: W0113 21:30:25.887938 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.888023 kubelet[1763]: E0113 21:30:25.887952 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.888255 kubelet[1763]: E0113 21:30:25.888235 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.888255 kubelet[1763]: W0113 21:30:25.888247 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.888344 kubelet[1763]: E0113 21:30:25.888257 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.888500 kubelet[1763]: E0113 21:30:25.888478 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.888500 kubelet[1763]: W0113 21:30:25.888491 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.888600 kubelet[1763]: E0113 21:30:25.888528 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.888746 kubelet[1763]: E0113 21:30:25.888728 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.888746 kubelet[1763]: W0113 21:30:25.888740 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.888823 kubelet[1763]: E0113 21:30:25.888749 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.888987 kubelet[1763]: E0113 21:30:25.888969 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.888987 kubelet[1763]: W0113 21:30:25.888980 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.889055 kubelet[1763]: E0113 21:30:25.888989 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.889203 kubelet[1763]: E0113 21:30:25.889185 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.889203 kubelet[1763]: W0113 21:30:25.889196 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.889284 kubelet[1763]: E0113 21:30:25.889206 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.889399 kubelet[1763]: E0113 21:30:25.889382 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.889399 kubelet[1763]: W0113 21:30:25.889392 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.889468 kubelet[1763]: E0113 21:30:25.889401 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.889611 kubelet[1763]: E0113 21:30:25.889592 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.889611 kubelet[1763]: W0113 21:30:25.889604 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.889697 kubelet[1763]: E0113 21:30:25.889614 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.889804 kubelet[1763]: E0113 21:30:25.889787 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.889804 kubelet[1763]: W0113 21:30:25.889798 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.889867 kubelet[1763]: E0113 21:30:25.889807 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.889992 kubelet[1763]: E0113 21:30:25.889974 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.889992 kubelet[1763]: W0113 21:30:25.889985 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.890071 kubelet[1763]: E0113 21:30:25.889994 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.890191 kubelet[1763]: E0113 21:30:25.890174 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.890191 kubelet[1763]: W0113 21:30:25.890184 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.890259 kubelet[1763]: E0113 21:30:25.890194 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.890380 kubelet[1763]: E0113 21:30:25.890363 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.890380 kubelet[1763]: W0113 21:30:25.890373 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.890452 kubelet[1763]: E0113 21:30:25.890382 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.890587 kubelet[1763]: E0113 21:30:25.890569 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.890587 kubelet[1763]: W0113 21:30:25.890580 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.890665 kubelet[1763]: E0113 21:30:25.890591 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.890786 kubelet[1763]: E0113 21:30:25.890769 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.890786 kubelet[1763]: W0113 21:30:25.890779 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.890869 kubelet[1763]: E0113 21:30:25.890788 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.890978 kubelet[1763]: E0113 21:30:25.890960 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.890978 kubelet[1763]: W0113 21:30:25.890972 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.891049 kubelet[1763]: E0113 21:30:25.890982 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.891172 kubelet[1763]: E0113 21:30:25.891155 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.891172 kubelet[1763]: W0113 21:30:25.891166 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.891248 kubelet[1763]: E0113 21:30:25.891175 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.891369 kubelet[1763]: E0113 21:30:25.891351 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.891369 kubelet[1763]: W0113 21:30:25.891362 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.891436 kubelet[1763]: E0113 21:30:25.891371 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.891608 kubelet[1763]: E0113 21:30:25.891590 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.891608 kubelet[1763]: W0113 21:30:25.891601 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.891688 kubelet[1763]: E0113 21:30:25.891611 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.899884 kubelet[1763]: E0113 21:30:25.899864 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.899884 kubelet[1763]: W0113 21:30:25.899878 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.899953 kubelet[1763]: E0113 21:30:25.899889 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.900106 kubelet[1763]: E0113 21:30:25.900088 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.900106 kubelet[1763]: W0113 21:30:25.900100 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.900160 kubelet[1763]: E0113 21:30:25.900115 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.900413 kubelet[1763]: E0113 21:30:25.900385 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.900413 kubelet[1763]: W0113 21:30:25.900407 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.900458 kubelet[1763]: E0113 21:30:25.900433 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.900688 kubelet[1763]: E0113 21:30:25.900668 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.900688 kubelet[1763]: W0113 21:30:25.900679 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.900743 kubelet[1763]: E0113 21:30:25.900694 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.900912 kubelet[1763]: E0113 21:30:25.900892 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.900912 kubelet[1763]: W0113 21:30:25.900904 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.900957 kubelet[1763]: E0113 21:30:25.900917 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.901158 kubelet[1763]: E0113 21:30:25.901144 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.901158 kubelet[1763]: W0113 21:30:25.901154 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.901211 kubelet[1763]: E0113 21:30:25.901167 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.901466 kubelet[1763]: E0113 21:30:25.901445 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.901466 kubelet[1763]: W0113 21:30:25.901461 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.901530 kubelet[1763]: E0113 21:30:25.901478 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.901717 kubelet[1763]: E0113 21:30:25.901700 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.901717 kubelet[1763]: W0113 21:30:25.901712 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.901773 kubelet[1763]: E0113 21:30:25.901726 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:30:25.901928 kubelet[1763]: E0113 21:30:25.901910 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.901928 kubelet[1763]: W0113 21:30:25.901922 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.901970 kubelet[1763]: E0113 21:30:25.901935 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.902152 kubelet[1763]: E0113 21:30:25.902140 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.902152 kubelet[1763]: W0113 21:30:25.902150 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.902210 kubelet[1763]: E0113 21:30:25.902162 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.902395 kubelet[1763]: E0113 21:30:25.902381 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.902426 kubelet[1763]: W0113 21:30:25.902394 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.902426 kubelet[1763]: E0113 21:30:25.902408 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:25.902659 kubelet[1763]: E0113 21:30:25.902647 1763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:30:25.902659 kubelet[1763]: W0113 21:30:25.902658 1763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:30:25.902713 kubelet[1763]: E0113 21:30:25.902669 1763 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:30:26.347316 kubelet[1763]: E0113 21:30:26.347272 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:26.368889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326443182.mount: Deactivated successfully. 
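
[Annotation] The same three-line FlexVolume failure recurs in bursts because the kubelet re-probes the plugin directory whenever a filesystem event fires there, and each probe re-runs the failing "init" call. A directory-watch-then-reprobe loop in the same spirit (a sketch using fsnotify under that assumption, not the kubelet's actual prober code):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	dir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
	if err := watcher.Add(dir); err != nil {
		log.Fatal(err)
	}
	for event := range watcher.Events {
		// Any create/write/remove in the plugin dir triggers a re-probe,
		// which re-runs "<driver> init" and re-logs the failure seen above.
		log.Printf("fs event %v -> re-probing plugins", event)
	}
}
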
Jan 13 21:30:26.446884 containerd[1441]: time="2025-01-13T21:30:26.446810412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:26.447636 containerd[1441]: time="2025-01-13T21:30:26.447583321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:30:26.448732 containerd[1441]: time="2025-01-13T21:30:26.448673816Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:26.450823 containerd[1441]: time="2025-01-13T21:30:26.450762333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:26.451532 containerd[1441]: time="2025-01-13T21:30:26.451470341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.563116201s" Jan 13 21:30:26.451571 containerd[1441]: time="2025-01-13T21:30:26.451529091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:30:26.453678 containerd[1441]: time="2025-01-13T21:30:26.453647233Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:30:26.468261 containerd[1441]: time="2025-01-13T21:30:26.468214582Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9\"" Jan 13 21:30:26.468803 containerd[1441]: time="2025-01-13T21:30:26.468762339Z" level=info msg="StartContainer for \"ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9\"" Jan 13 21:30:26.498656 systemd[1]: Started cri-containerd-ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9.scope - libcontainer container ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9. Jan 13 21:30:26.527828 containerd[1441]: time="2025-01-13T21:30:26.527768086Z" level=info msg="StartContainer for \"ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9\" returns successfully" Jan 13 21:30:26.540759 systemd[1]: cri-containerd-ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9.scope: Deactivated successfully. 
Jan 13 21:30:26.807403 kubelet[1763]: E0113 21:30:26.807345 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:26.822056 kubelet[1763]: E0113 21:30:26.822021 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:26.822117 kubelet[1763]: E0113 21:30:26.822021 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:26.954224 containerd[1441]: time="2025-01-13T21:30:26.954152659Z" level=info msg="shim disconnected" id=ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9 namespace=k8s.io Jan 13 21:30:26.954224 containerd[1441]: time="2025-01-13T21:30:26.954212301Z" level=warning msg="cleaning up after shim disconnected" id=ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9 namespace=k8s.io Jan 13 21:30:26.954224 containerd[1441]: time="2025-01-13T21:30:26.954224694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:27.347797 kubelet[1763]: E0113 21:30:27.347749 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:27.349400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffc78146259c4c9ac23cc181481aa65ec58646dc57a541bea39bfbab81ba91d9-rootfs.mount: Deactivated successfully. 
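
[Annotation] The recurring dns.go warning means the node's resolv.conf lists more nameservers than the kubelet will pass through (three, matching the glibc limit), so it truncates the list to "1.1.1.1 1.0.0.1 8.8.8.8". A stdlib-only sketch of that clamp (the limit constant is an assumption mirroring the logged behavior, not the kubelet's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit implied by the warning above

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
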
Jan 13 21:30:27.824187 kubelet[1763]: E0113 21:30:27.824153 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:27.824929 containerd[1441]: time="2025-01-13T21:30:27.824892778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:30:28.348299 kubelet[1763]: E0113 21:30:28.348243 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:28.807312 kubelet[1763]: E0113 21:30:28.807254 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:29.349322 kubelet[1763]: E0113 21:30:29.349266 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:30.350161 kubelet[1763]: E0113 21:30:30.350079 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:30.807385 kubelet[1763]: E0113 21:30:30.807324 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:31.214631 containerd[1441]: time="2025-01-13T21:30:31.214562759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.215375 containerd[1441]: time="2025-01-13T21:30:31.215316633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:30:31.216476 containerd[1441]: time="2025-01-13T21:30:31.216438958Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.218558 containerd[1441]: time="2025-01-13T21:30:31.218527695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:31.219198 containerd[1441]: time="2025-01-13T21:30:31.219157315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.394227198s" Jan 13 21:30:31.219198 containerd[1441]: time="2025-01-13T21:30:31.219187452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:30:31.221122 containerd[1441]: time="2025-01-13T21:30:31.221099037Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:30:31.235727 containerd[1441]: time="2025-01-13T21:30:31.235698206Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c\"" Jan 13 21:30:31.236141 containerd[1441]: time="2025-01-13T21:30:31.236112633Z" level=info msg="StartContainer for \"1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c\"" Jan 13 21:30:31.266628 systemd[1]: Started cri-containerd-1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c.scope - libcontainer container 1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c. Jan 13 21:30:31.294064 containerd[1441]: time="2025-01-13T21:30:31.294017164Z" level=info msg="StartContainer for \"1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c\" returns successfully" Jan 13 21:30:31.350648 kubelet[1763]: E0113 21:30:31.350587 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:31.831545 kubelet[1763]: E0113 21:30:31.831493 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:32.351608 kubelet[1763]: E0113 21:30:32.351566 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:32.807798 kubelet[1763]: E0113 21:30:32.807660 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:32.833356 kubelet[1763]: E0113 21:30:32.833307 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:32.881882 containerd[1441]: time="2025-01-13T21:30:32.881825121Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:30:32.885018 systemd[1]: cri-containerd-1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c.scope: Deactivated successfully. Jan 13 21:30:32.902519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c-rootfs.mount: Deactivated successfully. 
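
[Annotation] The "failed to reload cni configuration" error above is containerd's CNI watcher firing on a write to /etc/cni/net.d (the install-cni container writes calico-kubeconfig first) before any network config file exists there. A minimal stdlib sketch of the discovery step that comes up empty (the accepted extensions follow the CNI convention; this is not containerd's loader):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions CNI loaders accept
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// calico-kubeconfig alone does not count as a network config.
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
		os.Exit(1)
	}
	fmt.Println("cni configs:", configs)
}
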
Jan 13 21:30:32.913895 kubelet[1763]: I0113 21:30:32.913104 1763 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:30:33.218592 containerd[1441]: time="2025-01-13T21:30:33.218534531Z" level=info msg="shim disconnected" id=1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c namespace=k8s.io Jan 13 21:30:33.218592 containerd[1441]: time="2025-01-13T21:30:33.218584074Z" level=warning msg="cleaning up after shim disconnected" id=1b53524223b511346da34b18d961c104429c7b3a649512a7776a4c7293af459c namespace=k8s.io Jan 13 21:30:33.218592 containerd[1441]: time="2025-01-13T21:30:33.218591869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:33.352715 kubelet[1763]: E0113 21:30:33.352664 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:33.835564 kubelet[1763]: E0113 21:30:33.835528 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:33.836192 containerd[1441]: time="2025-01-13T21:30:33.836154069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:30:34.352976 kubelet[1763]: E0113 21:30:34.352921 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:34.812230 systemd[1]: Created slice kubepods-besteffort-podb39eacdd_e838_4890_93a2_6a032889b329.slice - libcontainer container kubepods-besteffort-podb39eacdd_e838_4890_93a2_6a032889b329.slice. Jan 13 21:30:34.814521 containerd[1441]: time="2025-01-13T21:30:34.814471623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-77qcb,Uid:b39eacdd-e838-4890-93a2-6a032889b329,Namespace:calico-system,Attempt:0,}" Jan 13 21:30:34.875388 containerd[1441]: time="2025-01-13T21:30:34.875328562Z" level=error msg="Failed to destroy network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:34.875790 containerd[1441]: time="2025-01-13T21:30:34.875761654Z" level=error msg="encountered an error cleaning up failed sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:34.875836 containerd[1441]: time="2025-01-13T21:30:34.875814593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-77qcb,Uid:b39eacdd-e838-4890-93a2-6a032889b329,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:34.876366 kubelet[1763]: E0113 21:30:34.876032 1763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:34.876366 kubelet[1763]: E0113 21:30:34.876096 1763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:34.876366 kubelet[1763]: E0113 21:30:34.876113 1763 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-77qcb" Jan 13 21:30:34.876462 kubelet[1763]: E0113 21:30:34.876159 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-77qcb_calico-system(b39eacdd-e838-4890-93a2-6a032889b329)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-77qcb_calico-system(b39eacdd-e838-4890-93a2-6a032889b329)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:34.876941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f-shm.mount: Deactivated successfully. 
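Both the sandbox add and the cleanup delete above fail on the same stat of /var/lib/calico/nodename, a file the calico/node container creates only once it is running with /var/lib/calico/ mounted. A sketch of that guard, reconstructed from the logged error text (the helper name is illustrative, not Calico's actual function):

```go
// Sketch of the precondition the Calico CNI binary enforces before any
// add/delete: /var/lib/calico/nodename is written by the calico/node
// container at startup, so its absence means the dataplane is not up yet.
package main

import (
	"fmt"
	"os"
)

func nodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		// Mirrors the error in the log, including the operator hint.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("CNI add failed:", err)
		return
	}
	fmt.Println("node:", name)
}
```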
Jan 13 21:30:35.353938 kubelet[1763]: E0113 21:30:35.353870 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:35.839311 kubelet[1763]: I0113 21:30:35.839181 1763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:35.839769 containerd[1441]: time="2025-01-13T21:30:35.839735489Z" level=info msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" Jan 13 21:30:35.840133 containerd[1441]: time="2025-01-13T21:30:35.839907351Z" level=info msg="Ensure that sandbox 92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f in task-service has been cleanup successfully" Jan 13 21:30:35.864812 containerd[1441]: time="2025-01-13T21:30:35.864759959Z" level=error msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" failed" error="failed to destroy network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:35.864997 kubelet[1763]: E0113 21:30:35.864966 1763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:35.865056 kubelet[1763]: E0113 21:30:35.865017 1763 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f"} Jan 13 21:30:35.865082 kubelet[1763]: E0113 21:30:35.865070 1763 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b39eacdd-e838-4890-93a2-6a032889b329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:35.865144 kubelet[1763]: E0113 21:30:35.865092 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b39eacdd-e838-4890-93a2-6a032889b329\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-77qcb" podUID="b39eacdd-e838-4890-93a2-6a032889b329" Jan 13 21:30:36.355087 kubelet[1763]: E0113 21:30:36.355013 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:36.487979 systemd[1]: Created slice kubepods-besteffort-pod7380b185_c82c_4172_9f86_41cc3da17d10.slice - 
libcontainer container kubepods-besteffort-pod7380b185_c82c_4172_9f86_41cc3da17d10.slice. Jan 13 21:30:36.565959 kubelet[1763]: I0113 21:30:36.565873 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b59j\" (UniqueName: \"kubernetes.io/projected/7380b185-c82c-4172-9f86-41cc3da17d10-kube-api-access-9b59j\") pod \"nginx-deployment-8587fbcb89-cg54g\" (UID: \"7380b185-c82c-4172-9f86-41cc3da17d10\") " pod="default/nginx-deployment-8587fbcb89-cg54g" Jan 13 21:30:36.791688 containerd[1441]: time="2025-01-13T21:30:36.791383799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cg54g,Uid:7380b185-c82c-4172-9f86-41cc3da17d10,Namespace:default,Attempt:0,}" Jan 13 21:30:36.935151 containerd[1441]: time="2025-01-13T21:30:36.935089118Z" level=error msg="Failed to destroy network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:36.936091 containerd[1441]: time="2025-01-13T21:30:36.935497814Z" level=error msg="encountered an error cleaning up failed sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:36.936091 containerd[1441]: time="2025-01-13T21:30:36.935563277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cg54g,Uid:7380b185-c82c-4172-9f86-41cc3da17d10,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:36.937536 kubelet[1763]: E0113 21:30:36.936733 1763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:36.937536 kubelet[1763]: E0113 21:30:36.936815 1763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-cg54g" Jan 13 21:30:36.937536 kubelet[1763]: E0113 21:30:36.936834 1763 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-8587fbcb89-cg54g" Jan 13 21:30:36.937683 kubelet[1763]: E0113 21:30:36.936870 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-cg54g_default(7380b185-c82c-4172-9f86-41cc3da17d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-cg54g_default(7380b185-c82c-4172-9f86-41cc3da17d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-cg54g" podUID="7380b185-c82c-4172-9f86-41cc3da17d10" Jan 13 21:30:36.937840 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931-shm.mount: Deactivated successfully. Jan 13 21:30:37.355524 kubelet[1763]: E0113 21:30:37.355461 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:37.843149 kubelet[1763]: I0113 21:30:37.843042 1763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:37.843582 containerd[1441]: time="2025-01-13T21:30:37.843548233Z" level=info msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" Jan 13 21:30:37.843783 containerd[1441]: time="2025-01-13T21:30:37.843750623Z" level=info msg="Ensure that sandbox fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931 in task-service has been cleanup successfully" Jan 13 21:30:37.876802 containerd[1441]: time="2025-01-13T21:30:37.876736773Z" level=error msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" failed" error="failed to destroy network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:30:37.877080 kubelet[1763]: E0113 21:30:37.876971 1763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:37.877080 kubelet[1763]: E0113 21:30:37.877064 1763 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931"} Jan 13 21:30:37.877144 kubelet[1763]: E0113 21:30:37.877109 1763 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7380b185-c82c-4172-9f86-41cc3da17d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:30:37.877223 kubelet[1763]: E0113 21:30:37.877149 1763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7380b185-c82c-4172-9f86-41cc3da17d10\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-cg54g" podUID="7380b185-c82c-4172-9f86-41cc3da17d10" Jan 13 21:30:37.932482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678131342.mount: Deactivated successfully. Jan 13 21:30:38.356533 kubelet[1763]: E0113 21:30:38.356456 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:38.554479 containerd[1441]: time="2025-01-13T21:30:38.554411591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.555089 containerd[1441]: time="2025-01-13T21:30:38.555058037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:30:38.556272 containerd[1441]: time="2025-01-13T21:30:38.556212944Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.558278 containerd[1441]: time="2025-01-13T21:30:38.558246325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:38.558785 containerd[1441]: time="2025-01-13T21:30:38.558752020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.722564158s" Jan 13 21:30:38.558822 containerd[1441]: time="2025-01-13T21:30:38.558784142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:30:38.566864 containerd[1441]: time="2025-01-13T21:30:38.566735225Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:30:38.582610 containerd[1441]: time="2025-01-13T21:30:38.582564187Z" level=info msg="CreateContainer within sandbox \"d024c1656fd3fd9a8ebfa65c5c0eaa5ef7df804e1aadbe7c0876b6521671cd9f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf\"" Jan 13 21:30:38.583063 containerd[1441]: time="2025-01-13T21:30:38.583026229Z" level=info msg="StartContainer for \"3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf\"" Jan 13 21:30:38.609645 systemd[1]: Started 
cri-containerd-3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf.scope - libcontainer container 3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf. Jan 13 21:30:38.639851 containerd[1441]: time="2025-01-13T21:30:38.639807230Z" level=info msg="StartContainer for \"3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf\" returns successfully" Jan 13 21:30:38.698777 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:30:38.698874 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 13 21:30:38.847000 kubelet[1763]: E0113 21:30:38.846950 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:38.860312 kubelet[1763]: I0113 21:30:38.860155 1763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bqr4w" podStartSLOduration=4.006193846 podStartE2EDuration="19.860139214s" podCreationTimestamp="2025-01-13 21:30:19 +0000 UTC" firstStartedPulling="2025-01-13 21:30:22.705474501 +0000 UTC m=+3.641490976" lastFinishedPulling="2025-01-13 21:30:38.559419879 +0000 UTC m=+19.495436344" observedRunningTime="2025-01-13 21:30:38.859705978 +0000 UTC m=+19.795722453" watchObservedRunningTime="2025-01-13 21:30:38.860139214 +0000 UTC m=+19.796155689" Jan 13 21:30:39.345731 kubelet[1763]: E0113 21:30:39.345573 1763 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:39.356683 kubelet[1763]: E0113 21:30:39.356648 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:40.020537 kernel: bpftool[2579]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:30:40.244326 systemd-networkd[1366]: vxlan.calico: Link UP Jan 13 21:30:40.244337 systemd-networkd[1366]: vxlan.calico: Gained carrier Jan 13 21:30:40.356975 kubelet[1763]: E0113 21:30:40.356936 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:41.357438 kubelet[1763]: E0113 21:30:41.357388 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:41.677037 kubelet[1763]: I0113 21:30:41.676892 1763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:30:41.677335 kubelet[1763]: E0113 21:30:41.677269 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:41.749943 systemd[1]: run-containerd-runc-k8s.io-3794289bc1e7b9699b8616b71b43da264299af1e1cb4992509c681bb72c7a1bf-runc.LOaRal.mount: Deactivated successfully. 
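The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, which can be recomputed from the monotonic m=+ offsets in the same line. A worked check of the logged numbers:

```go
// Worked check of the startup metrics logged above for calico-node-bqr4w:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling -
// firstStartedPulling), all taken from the monotonic m=+ offsets.
package main

import "fmt"

func main() {
	const (
		podStartE2E      = 19.860139214 // observedRunningTime - podCreationTimestamp, seconds
		firstStartedPull = 3.641490976  // m=+ offset of firstStartedPulling
		lastFinishedPull = 19.495436344 // m=+ offset of lastFinishedPulling
	)
	pullWindow := lastFinishedPull - firstStartedPull // 15.853945368 s spent pulling
	fmt.Printf("podStartSLOduration = %.9f s\n", podStartE2E-pullWindow)
	// Output: podStartSLOduration = 4.006193846 s, matching the logged value.
}
```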
Jan 13 21:30:42.316729 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Jan 13 21:30:42.358151 kubelet[1763]: E0113 21:30:42.358112 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:43.358790 kubelet[1763]: E0113 21:30:43.358741 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:44.358942 kubelet[1763]: E0113 21:30:44.358892 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:45.359855 kubelet[1763]: E0113 21:30:45.359809 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:46.360061 kubelet[1763]: E0113 21:30:46.359955 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:47.360569 kubelet[1763]: E0113 21:30:47.360483 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:48.360987 kubelet[1763]: E0113 21:30:48.360922 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:48.808535 containerd[1441]: time="2025-01-13T21:30:48.808373714Z" level=info msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.914 [INFO][2721] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.914 [INFO][2721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" iface="eth0" netns="/var/run/netns/cni-dd1a9582-0a5e-59d4-b042-d236d9798cfc" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.915 [INFO][2721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" iface="eth0" netns="/var/run/netns/cni-dd1a9582-0a5e-59d4-b042-d236d9798cfc" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.915 [INFO][2721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" iface="eth0" netns="/var/run/netns/cni-dd1a9582-0a5e-59d4-b042-d236d9798cfc" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.915 [INFO][2721] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.915 [INFO][2721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.936 [INFO][2728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.936 [INFO][2728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.936 [INFO][2728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.941 [WARNING][2728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.941 [INFO][2728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.943 [INFO][2728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:48.949722 containerd[1441]: 2025-01-13 21:30:48.947 [INFO][2721] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:30:48.950273 containerd[1441]: time="2025-01-13T21:30:48.949859928Z" level=info msg="TearDown network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" successfully" Jan 13 21:30:48.950273 containerd[1441]: time="2025-01-13T21:30:48.949890656Z" level=info msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" returns successfully" Jan 13 21:30:48.950894 containerd[1441]: time="2025-01-13T21:30:48.950602010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-77qcb,Uid:b39eacdd-e838-4890-93a2-6a032889b329,Namespace:calico-system,Attempt:1,}" Jan 13 21:30:48.951573 systemd[1]: run-netns-cni\x2ddd1a9582\x2d0a5e\x2d59d4\x2db042\x2dd236d9798cfc.mount: Deactivated successfully. 
Jan 13 21:30:49.049683 systemd-networkd[1366]: calia712f303f51: Link UP Jan 13 21:30:49.050097 systemd-networkd[1366]: calia712f303f51: Gained carrier Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:48.991 [INFO][2736] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.157-k8s-csi--node--driver--77qcb-eth0 csi-node-driver- calico-system b39eacdd-e838-4890-93a2-6a032889b329 1057 0 2025-01-13 21:30:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.157 csi-node-driver-77qcb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia712f303f51 [] []}} ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:48.992 [INFO][2736] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.017 [INFO][2750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" HandleID="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.024 [INFO][2750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" HandleID="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a0430), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.157", "pod":"csi-node-driver-77qcb", "timestamp":"2025-01-13 21:30:49.017267694 +0000 UTC"}, Hostname:"10.0.0.157", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.024 [INFO][2750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.024 [INFO][2750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.024 [INFO][2750] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.157' Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.026 [INFO][2750] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.029 [INFO][2750] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.032 [INFO][2750] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.034 [INFO][2750] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.035 [INFO][2750] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.035 [INFO][2750] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.036 [INFO][2750] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2 Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.039 [INFO][2750] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.045 [INFO][2750] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.1/26] block=192.168.116.0/26 handle="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.045 [INFO][2750] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.1/26] handle="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" host="10.0.0.157" Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.045 [INFO][2750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:49.061270 containerd[1441]: 2025-01-13 21:30:49.045 [INFO][2750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.1/26] IPv6=[] ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" HandleID="k8s-pod-network.fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.047 [INFO][2736] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-csi--node--driver--77qcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b39eacdd-e838-4890-93a2-6a032889b329", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"", Pod:"csi-node-driver-77qcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia712f303f51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.047 [INFO][2736] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.1/32] ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.047 [INFO][2736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia712f303f51 ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.050 [INFO][2736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.050 [INFO][2736] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" 
WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-csi--node--driver--77qcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b39eacdd-e838-4890-93a2-6a032889b329", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2", Pod:"csi-node-driver-77qcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia712f303f51", MAC:"8a:7c:c8:1f:74:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:49.062039 containerd[1441]: 2025-01-13 21:30:49.056 [INFO][2736] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2" Namespace="calico-system" Pod="csi-node-driver-77qcb" WorkloadEndpoint="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:30:49.080500 containerd[1441]: time="2025-01-13T21:30:49.079945578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:49.080500 containerd[1441]: time="2025-01-13T21:30:49.080483932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:49.080612 containerd[1441]: time="2025-01-13T21:30:49.080496796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:49.080641 containerd[1441]: time="2025-01-13T21:30:49.080590966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:49.102644 systemd[1]: Started cri-containerd-fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2.scope - libcontainer container fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2. 
Jan 13 21:30:49.113175 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:49.123566 containerd[1441]: time="2025-01-13T21:30:49.123525190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-77qcb,Uid:b39eacdd-e838-4890-93a2-6a032889b329,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2\"" Jan 13 21:30:49.124992 containerd[1441]: time="2025-01-13T21:30:49.124961441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:30:49.361786 kubelet[1763]: E0113 21:30:49.361740 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:49.951530 systemd[1]: run-containerd-runc-k8s.io-fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2-runc.450fP0.mount: Deactivated successfully. Jan 13 21:30:50.362554 kubelet[1763]: E0113 21:30:50.362520 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:50.637039 systemd-networkd[1366]: calia712f303f51: Gained IPv6LL Jan 13 21:30:50.694090 containerd[1441]: time="2025-01-13T21:30:50.694030779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:50.694883 containerd[1441]: time="2025-01-13T21:30:50.694845427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:30:50.696064 containerd[1441]: time="2025-01-13T21:30:50.696025710Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:50.698020 containerd[1441]: time="2025-01-13T21:30:50.697980594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:50.698574 containerd[1441]: time="2025-01-13T21:30:50.698538413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.573540502s" Jan 13 21:30:50.698643 containerd[1441]: time="2025-01-13T21:30:50.698575543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:30:50.700331 containerd[1441]: time="2025-01-13T21:30:50.700293046Z" level=info msg="CreateContainer within sandbox \"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:30:50.714929 containerd[1441]: time="2025-01-13T21:30:50.714891810Z" level=info msg="CreateContainer within sandbox \"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"59f85dbb1203ece0ea7b3a8cf6d526ba7fef784d65f0da710fc37893dfdb63ba\"" Jan 13 21:30:50.715416 containerd[1441]: time="2025-01-13T21:30:50.715386199Z" level=info msg="StartContainer 
for \"59f85dbb1203ece0ea7b3a8cf6d526ba7fef784d65f0da710fc37893dfdb63ba\"" Jan 13 21:30:50.718579 update_engine[1430]: I20250113 21:30:50.718535 1430 update_attempter.cc:509] Updating boot flags... Jan 13 21:30:50.743538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2845) Jan 13 21:30:50.770682 systemd[1]: Started cri-containerd-59f85dbb1203ece0ea7b3a8cf6d526ba7fef784d65f0da710fc37893dfdb63ba.scope - libcontainer container 59f85dbb1203ece0ea7b3a8cf6d526ba7fef784d65f0da710fc37893dfdb63ba. Jan 13 21:30:50.803408 containerd[1441]: time="2025-01-13T21:30:50.803351422Z" level=info msg="StartContainer for \"59f85dbb1203ece0ea7b3a8cf6d526ba7fef784d65f0da710fc37893dfdb63ba\" returns successfully" Jan 13 21:30:50.804588 containerd[1441]: time="2025-01-13T21:30:50.804556081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:30:51.363287 kubelet[1763]: E0113 21:30:51.363251 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:52.177188 containerd[1441]: time="2025-01-13T21:30:52.177130339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:52.178034 containerd[1441]: time="2025-01-13T21:30:52.177993707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:30:52.179254 containerd[1441]: time="2025-01-13T21:30:52.179225954Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:52.181229 containerd[1441]: time="2025-01-13T21:30:52.181192644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:52.181844 containerd[1441]: time="2025-01-13T21:30:52.181799095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.37720842s" Jan 13 21:30:52.181871 containerd[1441]: time="2025-01-13T21:30:52.181845302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:30:52.183703 containerd[1441]: time="2025-01-13T21:30:52.183667219Z" level=info msg="CreateContainer within sandbox \"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:30:52.199187 containerd[1441]: time="2025-01-13T21:30:52.199148954Z" level=info msg="CreateContainer within sandbox \"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a0b775a058f8c28ef3f56611824a56f4ce3eab746eff49afe12caf29503d5874\"" Jan 13 21:30:52.199642 containerd[1441]: time="2025-01-13T21:30:52.199590292Z" 
level=info msg="StartContainer for \"a0b775a058f8c28ef3f56611824a56f4ce3eab746eff49afe12caf29503d5874\"" Jan 13 21:30:52.233637 systemd[1]: Started cri-containerd-a0b775a058f8c28ef3f56611824a56f4ce3eab746eff49afe12caf29503d5874.scope - libcontainer container a0b775a058f8c28ef3f56611824a56f4ce3eab746eff49afe12caf29503d5874. Jan 13 21:30:52.260368 containerd[1441]: time="2025-01-13T21:30:52.260302440Z" level=info msg="StartContainer for \"a0b775a058f8c28ef3f56611824a56f4ce3eab746eff49afe12caf29503d5874\" returns successfully" Jan 13 21:30:52.363431 kubelet[1763]: E0113 21:30:52.363381 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:52.807971 containerd[1441]: time="2025-01-13T21:30:52.807916004Z" level=info msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" Jan 13 21:30:52.867546 kubelet[1763]: I0113 21:30:52.867494 1763 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:30:52.867546 kubelet[1763]: I0113 21:30:52.867542 1763 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.845 [INFO][2926] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.846 [INFO][2926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" iface="eth0" netns="/var/run/netns/cni-05fcf2ff-31e7-1110-7e3d-346203a30867" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.846 [INFO][2926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" iface="eth0" netns="/var/run/netns/cni-05fcf2ff-31e7-1110-7e3d-346203a30867" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.846 [INFO][2926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" iface="eth0" netns="/var/run/netns/cni-05fcf2ff-31e7-1110-7e3d-346203a30867" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.846 [INFO][2926] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.846 [INFO][2926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.864 [INFO][2934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.864 [INFO][2934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.864 [INFO][2934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.869 [WARNING][2934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.869 [INFO][2934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.871 [INFO][2934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:30:52.875584 containerd[1441]: 2025-01-13 21:30:52.873 [INFO][2926] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:30:52.875957 containerd[1441]: time="2025-01-13T21:30:52.875803112Z" level=info msg="TearDown network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" successfully" Jan 13 21:30:52.875957 containerd[1441]: time="2025-01-13T21:30:52.875829673Z" level=info msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" returns successfully" Jan 13 21:30:52.876400 containerd[1441]: time="2025-01-13T21:30:52.876355119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cg54g,Uid:7380b185-c82c-4172-9f86-41cc3da17d10,Namespace:default,Attempt:1,}" Jan 13 21:30:52.878140 systemd[1]: run-netns-cni\x2d05fcf2ff\x2d31e7\x2d1110\x2d7e3d\x2d346203a30867.mount: Deactivated successfully. 
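The csi_plugin.go lines above show the kubelet validating and registering the Tigera CSI driver through the unix socket advertised by the node-driver-registrar container started just before. A minimal reachability probe of that endpoint (path taken from the log; the real registration also issues gRPC Identity calls such as GetPluginInfo over this socket, which this sketch omits):

```go
// Quick probe that the CSI driver's unix socket is accepting connections,
// the precondition for the registration logged above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("csi.tigera.io not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("csi.tigera.io is serving at", sock)
}
```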
Jan 13 21:30:52.978609 systemd-networkd[1366]: cali3aa46208abb: Link UP Jan 13 21:30:52.979108 systemd-networkd[1366]: cali3aa46208abb: Gained carrier Jan 13 21:30:52.985170 kubelet[1763]: I0113 21:30:52.984989 1763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-77qcb" podStartSLOduration=30.92704436 podStartE2EDuration="33.984965492s" podCreationTimestamp="2025-01-13 21:30:19 +0000 UTC" firstStartedPulling="2025-01-13 21:30:49.124634038 +0000 UTC m=+30.060650513" lastFinishedPulling="2025-01-13 21:30:52.18255517 +0000 UTC m=+33.118571645" observedRunningTime="2025-01-13 21:30:52.884653701 +0000 UTC m=+33.820670176" watchObservedRunningTime="2025-01-13 21:30:52.984965492 +0000 UTC m=+33.920981967" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.920 [INFO][2942] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0 nginx-deployment-8587fbcb89- default 7380b185-c82c-4172-9f86-41cc3da17d10 1082 0 2025-01-13 21:30:36 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.157 nginx-deployment-8587fbcb89-cg54g eth0 default [] [] [kns.default ksa.default.default] cali3aa46208abb [] []}} ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.921 [INFO][2942] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.946 [INFO][2956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" HandleID="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.953 [INFO][2956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" HandleID="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5a90), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.157", "pod":"nginx-deployment-8587fbcb89-cg54g", "timestamp":"2025-01-13 21:30:52.946934765 +0000 UTC"}, Hostname:"10.0.0.157", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.953 [INFO][2956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.953 [INFO][2956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.953 [INFO][2956] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.157' Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.955 [INFO][2956] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.958 [INFO][2956] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.961 [INFO][2956] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.963 [INFO][2956] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.965 [INFO][2956] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.965 [INFO][2956] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.966 [INFO][2956] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403 Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.970 [INFO][2956] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.973 [INFO][2956] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.2/26] block=192.168.116.0/26 handle="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.974 [INFO][2956] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.2/26] handle="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" host="10.0.0.157" Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.974 [INFO][2956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:30:52.987376 containerd[1441]: 2025-01-13 21:30:52.974 [INFO][2956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.2/26] IPv6=[] ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" HandleID="k8s-pod-network.e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.976 [INFO][2942] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"7380b185-c82c-4172-9f86-41cc3da17d10", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-cg54g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3aa46208abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.976 [INFO][2942] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.2/32] ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.976 [INFO][2942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aa46208abb ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.978 [INFO][2942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.979 [INFO][2942] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"7380b185-c82c-4172-9f86-41cc3da17d10", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403", Pod:"nginx-deployment-8587fbcb89-cg54g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3aa46208abb", MAC:"b6:98:ce:29:f6:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:30:52.987908 containerd[1441]: 2025-01-13 21:30:52.984 [INFO][2942] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403" Namespace="default" Pod="nginx-deployment-8587fbcb89-cg54g" WorkloadEndpoint="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:30:53.008224 containerd[1441]: time="2025-01-13T21:30:53.008104252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:53.008224 containerd[1441]: time="2025-01-13T21:30:53.008171709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:53.008224 containerd[1441]: time="2025-01-13T21:30:53.008183952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:53.008440 containerd[1441]: time="2025-01-13T21:30:53.008286587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:53.027647 systemd[1]: Started cri-containerd-e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403.scope - libcontainer container e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403. 
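Editor's note: every Calico CNI entry in this capture carries the prefix "[LEVEL][pid] file.go line: message". When post-processing a log like this one, a small parser makes entries filterable by component; the regex below is a sketch matched against the format shown above, not an official log schema.

    import re

    CALICO_RE = re.compile(r'\[(INFO|WARNING|ERROR)\]\[(\d+)\] (\S+) (\d+): (.*)')

    line = ('2025-01-13 21:30:52.976 [INFO][2942] cni-plugin/k8s.go 387: '
            'Calico CNI using IPs: [192.168.116.2/32]')
    m = CALICO_RE.search(line)
    level, pid, source, lineno, msg = m.groups()
    print(level, source, lineno)  # INFO cni-plugin/k8s.go 387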
Jan 13 21:30:53.038722 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:53.062660 containerd[1441]: time="2025-01-13T21:30:53.062369549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-cg54g,Uid:7380b185-c82c-4172-9f86-41cc3da17d10,Namespace:default,Attempt:1,} returns sandbox id \"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403\"" Jan 13 21:30:53.064134 containerd[1441]: time="2025-01-13T21:30:53.064104708Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:30:53.364211 kubelet[1763]: E0113 21:30:53.364156 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:54.364702 kubelet[1763]: E0113 21:30:54.364638 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:54.989741 systemd-networkd[1366]: cali3aa46208abb: Gained IPv6LL Jan 13 21:30:55.365692 kubelet[1763]: E0113 21:30:55.365665 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:55.525190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752724498.mount: Deactivated successfully. Jan 13 21:30:56.366204 kubelet[1763]: E0113 21:30:56.366145 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:57.367285 kubelet[1763]: E0113 21:30:57.367198 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:57.653391 containerd[1441]: time="2025-01-13T21:30:57.653243914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:57.654167 containerd[1441]: time="2025-01-13T21:30:57.654135079Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 21:30:57.655473 containerd[1441]: time="2025-01-13T21:30:57.655436541Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:57.660363 containerd[1441]: time="2025-01-13T21:30:57.660225837Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.596079199s" Jan 13 21:30:57.660363 containerd[1441]: time="2025-01-13T21:30:57.660284017Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:30:57.661037 containerd[1441]: time="2025-01-13T21:30:57.661009719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:57.662469 containerd[1441]: time="2025-01-13T21:30:57.662436697Z" level=info msg="CreateContainer within sandbox \"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:30:57.675325 containerd[1441]: time="2025-01-13T21:30:57.675292887Z" level=info msg="CreateContainer within sandbox \"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda\"" Jan 13 21:30:57.675634 containerd[1441]: time="2025-01-13T21:30:57.675603313Z" level=info msg="StartContainer for \"84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda\"" Jan 13 21:30:57.676824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389537067.mount: Deactivated successfully. Jan 13 21:30:57.737650 systemd[1]: run-containerd-runc-k8s.io-84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda-runc.sVc3J1.mount: Deactivated successfully. Jan 13 21:30:57.751634 systemd[1]: Started cri-containerd-84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda.scope - libcontainer container 84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda. Jan 13 21:30:57.774777 containerd[1441]: time="2025-01-13T21:30:57.774734009Z" level=info msg="StartContainer for \"84f2e944f37c9237a233db0f0335864fe26de81bc2518663ff70beff514e8bda\" returns successfully" Jan 13 21:30:57.888699 kubelet[1763]: I0113 21:30:57.888638 1763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-cg54g" podStartSLOduration=17.291310639 podStartE2EDuration="21.888623311s" podCreationTimestamp="2025-01-13 21:30:36 +0000 UTC" firstStartedPulling="2025-01-13 21:30:53.063870204 +0000 UTC m=+33.999886679" lastFinishedPulling="2025-01-13 21:30:57.661182876 +0000 UTC m=+38.597199351" observedRunningTime="2025-01-13 21:30:57.888285021 +0000 UTC m=+38.824301496" watchObservedRunningTime="2025-01-13 21:30:57.888623311 +0000 UTC m=+38.824639786" Jan 13 21:30:58.367654 kubelet[1763]: E0113 21:30:58.367599 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:59.345672 kubelet[1763]: E0113 21:30:59.345634 1763 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:59.368254 kubelet[1763]: E0113 21:30:59.368219 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:00.368921 kubelet[1763]: E0113 21:31:00.368858 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:01.369585 kubelet[1763]: E0113 21:31:01.369531 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:02.369911 kubelet[1763]: E0113 21:31:02.369868 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:03.370490 kubelet[1763]: E0113 21:31:03.370420 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:04.371537 kubelet[1763]: E0113 21:31:04.371463 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:04.804425 systemd[1]: Created slice kubepods-besteffort-podfbfa3fcf_abc3_43d0_924b_4818c5249309.slice - libcontainer container kubepods-besteffort-podfbfa3fcf_abc3_43d0_924b_4818c5249309.slice. 
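Editor's note: the pod_startup_latency_tracker entry above encodes a simple relationship: podStartSLOduration is the end-to-end startup time minus the image-pull window, so slow pulls are excluded from the SLO figure. The arithmetic checks out exactly against the logged timestamps (all within minute 21:30):

    from decimal import Decimal

    created          = Decimal("36.000000000")  # podCreationTimestamp 21:30:36
    pull_started     = Decimal("53.063870204")  # firstStartedPulling
    pull_finished    = Decimal("57.661182876")  # lastFinishedPulling
    observed_running = Decimal("57.888623311")  # watchObservedRunningTime

    e2e  = observed_running - created     # 21.888623311 == podStartE2EDuration
    pull = pull_finished - pull_started   # 4.597312672  (image pull window)
    print(e2e - pull)                     # 17.291310639 == podStartSLOduration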
Jan 13 21:31:04.907578 kubelet[1763]: I0113 21:31:04.907527 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fbfa3fcf-abc3-43d0-924b-4818c5249309-data\") pod \"nfs-server-provisioner-0\" (UID: \"fbfa3fcf-abc3-43d0-924b-4818c5249309\") " pod="default/nfs-server-provisioner-0" Jan 13 21:31:04.907578 kubelet[1763]: I0113 21:31:04.907571 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9krr\" (UniqueName: \"kubernetes.io/projected/fbfa3fcf-abc3-43d0-924b-4818c5249309-kube-api-access-f9krr\") pod \"nfs-server-provisioner-0\" (UID: \"fbfa3fcf-abc3-43d0-924b-4818c5249309\") " pod="default/nfs-server-provisioner-0" Jan 13 21:31:05.108076 containerd[1441]: time="2025-01-13T21:31:05.108013781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fbfa3fcf-abc3-43d0-924b-4818c5249309,Namespace:default,Attempt:0,}" Jan 13 21:31:05.372609 kubelet[1763]: E0113 21:31:05.372497 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:05.835637 systemd-networkd[1366]: cali60e51b789ff: Link UP Jan 13 21:31:05.836104 systemd-networkd[1366]: cali60e51b789ff: Gained carrier Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.709 [INFO][3127] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.157-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default fbfa3fcf-abc3-43d0-924b-4818c5249309 1167 0 2025-01-13 21:31:04 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.157 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.709 [INFO][3127] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.732 [INFO][3141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" HandleID="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Workload="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.740 [INFO][3141] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" HandleID="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Workload="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133f00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.157", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 21:31:05.732805759 +0000 UTC"}, Hostname:"10.0.0.157", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.740 [INFO][3141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.740 [INFO][3141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.740 [INFO][3141] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.157' Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.741 [INFO][3141] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.744 [INFO][3141] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.747 [INFO][3141] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.748 [INFO][3141] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.750 [INFO][3141] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.750 [INFO][3141] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.751 [INFO][3141] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1 Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.809 [INFO][3141] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.830 [INFO][3141] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.3/26] block=192.168.116.0/26 handle="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.830 [INFO][3141] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.3/26] handle="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" host="10.0.0.157" Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.830 [INFO][3141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
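Editor's note: both sandboxes draw from the same affine block, so addresses come out sequentially: .1 went to the csi-node-driver endpoint (visible in the teardown entries later in this log), .2 to the nginx pod, .3 here, and .4 to test-pod-1 further down. A /26 leaves room for 62 workload addresses per block:

    import ipaddress

    block = ipaddress.ip_network("192.168.116.0/26")
    hosts = list(block.hosts())
    print(block.num_addresses, len(hosts))  # 64 62
    print(hosts[0], hosts[1], hosts[2], hosts[3])
    # 192.168.116.1 192.168.116.2 192.168.116.3 192.168.116.4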
Jan 13 21:31:05.927971 containerd[1441]: 2025-01-13 21:31:05.830 [INFO][3141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.3/26] IPv6=[] ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" HandleID="k8s-pod-network.39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Workload="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.928838 containerd[1441]: 2025-01-13 21:31:05.833 [INFO][3127] cni-plugin/k8s.go 386: Populated endpoint ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fbfa3fcf-abc3-43d0-924b-4818c5249309", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:05.928838 containerd[1441]: 2025-01-13 21:31:05.833 [INFO][3127] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.3/32] ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.928838 containerd[1441]: 2025-01-13 21:31:05.833 [INFO][3127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.928838 containerd[1441]: 2025-01-13 21:31:05.835 [INFO][3127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.929035 containerd[1441]: 2025-01-13 21:31:05.836 [INFO][3127] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fbfa3fcf-abc3-43d0-924b-4818c5249309", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:6d:36:cd:df:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:05.929035 containerd[1441]: 2025-01-13 21:31:05.925 [INFO][3127] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.157-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:31:05.987971 containerd[1441]: time="2025-01-13T21:31:05.987269667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:31:05.987971 containerd[1441]: time="2025-01-13T21:31:05.987957634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:31:05.987971 containerd[1441]: time="2025-01-13T21:31:05.987976610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:31:05.988193 containerd[1441]: time="2025-01-13T21:31:05.988073172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:31:06.009634 systemd[1]: Started cri-containerd-39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1.scope - libcontainer container 39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1. 
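Editor's note: the WorkloadEndpointPort dump above prints ports in hex; they decode to the standard NFS service ports declared in the endpoint summary earlier in this log (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662), each listed once for TCP and once for UDP:

    ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
             "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
    for name, port in ports.items():
        print(f"{name}: {port}")
    # nfs: 2049, nlockmgr: 32803, mountd: 20048,
    # rquotad: 875, rpcbind: 111, statd: 662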
Jan 13 21:31:06.020835 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:31:06.043043 containerd[1441]: time="2025-01-13T21:31:06.043000205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fbfa3fcf-abc3-43d0-924b-4818c5249309,Namespace:default,Attempt:0,} returns sandbox id \"39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1\"" Jan 13 21:31:06.044994 containerd[1441]: time="2025-01-13T21:31:06.044965627Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:31:06.373139 kubelet[1763]: E0113 21:31:06.373083 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:07.373891 kubelet[1763]: E0113 21:31:07.373725 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:07.404713 systemd-networkd[1366]: cali60e51b789ff: Gained IPv6LL Jan 13 21:31:08.374773 kubelet[1763]: E0113 21:31:08.374725 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:08.449983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785463097.mount: Deactivated successfully. Jan 13 21:31:09.375957 kubelet[1763]: E0113 21:31:09.375885 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:10.376630 kubelet[1763]: E0113 21:31:10.376563 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:10.718208 containerd[1441]: time="2025-01-13T21:31:10.718043325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:10.718969 containerd[1441]: time="2025-01-13T21:31:10.718922279Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 13 21:31:10.720438 containerd[1441]: time="2025-01-13T21:31:10.720406824Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:10.723411 containerd[1441]: time="2025-01-13T21:31:10.723349241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:10.724694 containerd[1441]: time="2025-01-13T21:31:10.724637887Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.679631192s" Jan 13 21:31:10.724756 containerd[1441]: time="2025-01-13T21:31:10.724697739Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 21:31:10.727117 containerd[1441]: 
time="2025-01-13T21:31:10.727075644Z" level=info msg="CreateContainer within sandbox \"39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:31:10.741714 containerd[1441]: time="2025-01-13T21:31:10.741664171Z" level=info msg="CreateContainer within sandbox \"39e363866cf08311f79e2fd8f844dca020cb896b2e784bc9958b5554dd953dc1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"02066b0fdf53e8b614046a24405008986d42bb66981d3318b77159ab2270596d\"" Jan 13 21:31:10.743154 containerd[1441]: time="2025-01-13T21:31:10.743104692Z" level=info msg="StartContainer for \"02066b0fdf53e8b614046a24405008986d42bb66981d3318b77159ab2270596d\"" Jan 13 21:31:10.775635 systemd[1]: Started cri-containerd-02066b0fdf53e8b614046a24405008986d42bb66981d3318b77159ab2270596d.scope - libcontainer container 02066b0fdf53e8b614046a24405008986d42bb66981d3318b77159ab2270596d. Jan 13 21:31:10.802048 containerd[1441]: time="2025-01-13T21:31:10.801969480Z" level=info msg="StartContainer for \"02066b0fdf53e8b614046a24405008986d42bb66981d3318b77159ab2270596d\" returns successfully" Jan 13 21:31:11.377493 kubelet[1763]: E0113 21:31:11.377445 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:11.735937 kubelet[1763]: E0113 21:31:11.735815 1763 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:31:11.749416 kubelet[1763]: I0113 21:31:11.749365 1763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.068452116 podStartE2EDuration="7.749346245s" podCreationTimestamp="2025-01-13 21:31:04 +0000 UTC" firstStartedPulling="2025-01-13 21:31:06.044686331 +0000 UTC m=+46.980702806" lastFinishedPulling="2025-01-13 21:31:10.72558047 +0000 UTC m=+51.661596935" observedRunningTime="2025-01-13 21:31:10.912461694 +0000 UTC m=+51.848478169" watchObservedRunningTime="2025-01-13 21:31:11.749346245 +0000 UTC m=+52.685362720" Jan 13 21:31:12.378068 kubelet[1763]: E0113 21:31:12.378005 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:13.378443 kubelet[1763]: E0113 21:31:13.378400 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:14.379541 kubelet[1763]: E0113 21:31:14.379460 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:15.379662 kubelet[1763]: E0113 21:31:15.379611 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:16.379797 kubelet[1763]: E0113 21:31:16.379740 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:17.380217 kubelet[1763]: E0113 21:31:17.380171 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:18.380718 kubelet[1763]: E0113 21:31:18.380667 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:19.345166 kubelet[1763]: E0113 21:31:19.345109 1763 
file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:19.360572 containerd[1441]: time="2025-01-13T21:31:19.360544935Z" level=info msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" Jan 13 21:31:19.381042 kubelet[1763]: E0113 21:31:19.380974 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.393 [WARNING][3346] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-csi--node--driver--77qcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b39eacdd-e838-4890-93a2-6a032889b329", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2", Pod:"csi-node-driver-77qcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia712f303f51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.393 [INFO][3346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.393 [INFO][3346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" iface="eth0" netns="" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.393 [INFO][3346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.393 [INFO][3346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.412 [INFO][3354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.412 [INFO][3354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.412 [INFO][3354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.417 [WARNING][3354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.417 [INFO][3354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.418 [INFO][3354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:31:19.422760 containerd[1441]: 2025-01-13 21:31:19.420 [INFO][3346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.423175 containerd[1441]: time="2025-01-13T21:31:19.422802534Z" level=info msg="TearDown network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" successfully" Jan 13 21:31:19.423175 containerd[1441]: time="2025-01-13T21:31:19.422834144Z" level=info msg="StopPodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" returns successfully" Jan 13 21:31:19.423415 containerd[1441]: time="2025-01-13T21:31:19.423387704Z" level=info msg="RemovePodSandbox for \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" Jan 13 21:31:19.423449 containerd[1441]: time="2025-01-13T21:31:19.423420846Z" level=info msg="Forcibly stopping sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\"" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.457 [WARNING][3378] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-csi--node--driver--77qcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b39eacdd-e838-4890-93a2-6a032889b329", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"fe89d5b1abf495c82cefe1800fd7d99fcb766efb72e01b475dbb8e70b3b9f1f2", Pod:"csi-node-driver-77qcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia712f303f51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.458 [INFO][3378] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.458 [INFO][3378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" iface="eth0" netns="" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.458 [INFO][3378] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.458 [INFO][3378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.477 [INFO][3385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.477 [INFO][3385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.477 [INFO][3385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.482 [WARNING][3385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.482 [INFO][3385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" HandleID="k8s-pod-network.92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Workload="10.0.0.157-k8s-csi--node--driver--77qcb-eth0" Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.483 [INFO][3385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:31:19.487408 containerd[1441]: 2025-01-13 21:31:19.485 [INFO][3378] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f" Jan 13 21:31:19.487888 containerd[1441]: time="2025-01-13T21:31:19.487439935Z" level=info msg="TearDown network for sandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" successfully" Jan 13 21:31:19.490766 containerd[1441]: time="2025-01-13T21:31:19.490730488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:31:19.490831 containerd[1441]: time="2025-01-13T21:31:19.490773429Z" level=info msg="RemovePodSandbox \"92ac9a1f66ef6c83798acc75404e5069bae0b9ad310c75d33ee6d02b107bb88f\" returns successfully" Jan 13 21:31:19.491242 containerd[1441]: time="2025-01-13T21:31:19.491216231Z" level=info msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.524 [WARNING][3408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"7380b185-c82c-4172-9f86-41cc3da17d10", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403", Pod:"nginx-deployment-8587fbcb89-cg54g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3aa46208abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.524 [INFO][3408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.524 [INFO][3408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" iface="eth0" netns="" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.524 [INFO][3408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.524 [INFO][3408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.543 [INFO][3415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.543 [INFO][3415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.543 [INFO][3415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.548 [WARNING][3415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.548 [INFO][3415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.549 [INFO][3415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:31:19.553335 containerd[1441]: 2025-01-13 21:31:19.551 [INFO][3408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.553786 containerd[1441]: time="2025-01-13T21:31:19.553365066Z" level=info msg="TearDown network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" successfully" Jan 13 21:31:19.553786 containerd[1441]: time="2025-01-13T21:31:19.553389211Z" level=info msg="StopPodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" returns successfully" Jan 13 21:31:19.553924 containerd[1441]: time="2025-01-13T21:31:19.553876848Z" level=info msg="RemovePodSandbox for \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" Jan 13 21:31:19.553924 containerd[1441]: time="2025-01-13T21:31:19.553904069Z" level=info msg="Forcibly stopping sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\"" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.585 [WARNING][3438] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"7380b185-c82c-4172-9f86-41cc3da17d10", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 30, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"e16c38394c372fabcef5f2b3ef51c1c203a1972b21ec90331739146e894ce403", Pod:"nginx-deployment-8587fbcb89-cg54g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3aa46208abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.585 [INFO][3438] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.585 [INFO][3438] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" iface="eth0" netns="" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.586 [INFO][3438] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.586 [INFO][3438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.604 [INFO][3445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.604 [INFO][3445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.604 [INFO][3445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.609 [WARNING][3445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.609 [INFO][3445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" HandleID="k8s-pod-network.fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Workload="10.0.0.157-k8s-nginx--deployment--8587fbcb89--cg54g-eth0" Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.610 [INFO][3445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:31:19.614545 containerd[1441]: 2025-01-13 21:31:19.612 [INFO][3438] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931" Jan 13 21:31:19.615035 containerd[1441]: time="2025-01-13T21:31:19.614569516Z" level=info msg="TearDown network for sandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" successfully" Jan 13 21:31:19.617601 containerd[1441]: time="2025-01-13T21:31:19.617555406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:31:19.617601 containerd[1441]: time="2025-01-13T21:31:19.617593317Z" level=info msg="RemovePodSandbox \"fd8de53954dfa2d684218909318c3248cea66f318a21891764dd6830633eb931\" returns successfully" Jan 13 21:31:20.381675 kubelet[1763]: E0113 21:31:20.381620 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:20.908658 systemd[1]: Created slice kubepods-besteffort-pod46b1ce18_e6b6_4d1d_9300_301178447b39.slice - libcontainer container kubepods-besteffort-pod46b1ce18_e6b6_4d1d_9300_301178447b39.slice. Jan 13 21:31:20.993338 kubelet[1763]: I0113 21:31:20.993290 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktpb\" (UniqueName: \"kubernetes.io/projected/46b1ce18-e6b6-4d1d-9300-301178447b39-kube-api-access-fktpb\") pod \"test-pod-1\" (UID: \"46b1ce18-e6b6-4d1d-9300-301178447b39\") " pod="default/test-pod-1" Jan 13 21:31:20.993338 kubelet[1763]: I0113 21:31:20.993325 1763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7446ced8-d114-4261-b94b-c6786427e0e1\" (UniqueName: \"kubernetes.io/nfs/46b1ce18-e6b6-4d1d-9300-301178447b39-pvc-7446ced8-d114-4261-b94b-c6786427e0e1\") pod \"test-pod-1\" (UID: \"46b1ce18-e6b6-4d1d-9300-301178447b39\") " pod="default/test-pod-1" Jan 13 21:31:21.117537 kernel: FS-Cache: Loaded Jan 13 21:31:21.185628 kernel: RPC: Registered named UNIX socket transport module. Jan 13 21:31:21.185733 kernel: RPC: Registered udp transport module. Jan 13 21:31:21.185753 kernel: RPC: Registered tcp transport module. Jan 13 21:31:21.187179 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:31:21.187213 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
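Editor's note: the slice names systemd reports are a mechanical transform of the pod UID: kubelet places a BestEffort pod under kubepods-besteffort-pod<UID>.slice with the UID's dashes escaped to underscores, because dashes denote hierarchy levels in systemd slice names. A sketch of the transform follows; kubelet's real logic lives in its cgroup manager.

    def pod_slice(uid: str, qos: str = "besteffort") -> str:
        # Dashes separate slice hierarchy levels in systemd, so the
        # pod UID's dashes are escaped to underscores in the unit name.
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("46b1ce18-e6b6-4d1d-9300-301178447b39"))
    # kubepods-besteffort-pod46b1ce18_e6b6_4d1d_9300_301178447b39.slice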
Jan 13 21:31:21.382406 kubelet[1763]: E0113 21:31:21.382362 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:31:21.444566 kernel: NFS: Registering the id_resolver key type Jan 13 21:31:21.444631 kernel: Key type id_resolver registered Jan 13 21:31:21.444651 kernel: Key type id_legacy registered Jan 13 21:31:21.470896 nfsidmap[3479]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:31:21.475331 nfsidmap[3482]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:31:21.512031 containerd[1441]: time="2025-01-13T21:31:21.511991406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46b1ce18-e6b6-4d1d-9300-301178447b39,Namespace:default,Attempt:0,}" Jan 13 21:31:21.600599 systemd-networkd[1366]: cali5ec59c6bf6e: Link UP Jan 13 21:31:21.601071 systemd-networkd[1366]: cali5ec59c6bf6e: Gained carrier Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.546 [INFO][3486] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.157-k8s-test--pod--1-eth0 default 46b1ce18-e6b6-4d1d-9300-301178447b39 1247 0 2025-01-13 21:31:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.157 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-" Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.546 [INFO][3486] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0" Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.568 [INFO][3499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" HandleID="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Workload="10.0.0.157-k8s-test--pod--1-eth0" Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.575 [INFO][3499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" HandleID="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Workload="10.0.0.157-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503590), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.157", "pod":"test-pod-1", "timestamp":"2025-01-13 21:31:21.568824977 +0000 UTC"}, Hostname:"10.0.0.157", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.575 [INFO][3499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.575 [INFO][3499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
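Editor's note: the nfsidmap complaints above are expected in this setup. NFSv4 identities travel as user@domain strings, and the mapper only translates names whose domain matches its configured local domain (localdomain here), so root@nfs-server-provisioner.default.svc.cluster.local is rejected and ownership typically falls back to the anonymous user. Roughly the check being applied (an illustrative sketch, not libnfsidmap's code):

    def maps_into_domain(name: str, local_domain: str) -> bool:
        # NFSv4 owner strings look like "user@domain"; only names in
        # the local idmapping domain can be translated to a local UID.
        user, _, domain = name.partition("@")
        return bool(user) and domain == local_domain

    print(maps_into_domain(
        "root@nfs-server-provisioner.default.svc.cluster.local",
        "localdomain"))  # False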
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.575 [INFO][3499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.157'
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.577 [INFO][3499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.580 [INFO][3499] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.583 [INFO][3499] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.585 [INFO][3499] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.586 [INFO][3499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.586 [INFO][3499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.587 [INFO][3499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.590 [INFO][3499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.596 [INFO][3499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.4/26] block=192.168.116.0/26 handle="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.596 [INFO][3499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.4/26] handle="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" host="10.0.0.157"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.596 [INFO][3499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
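The ipam.go sequence above is Calico claiming 192.168.116.4 from the /26 block affine to node 10.0.0.157: confirm the host's block affinity, load the block, and write it back to claim the address, all under the host-wide IPAM lock. Stripped of the datastore, affinity, and locking machinery, the core "first free address in a small block" step looks roughly like this toy Go sketch (the already-allocated addresses below are assumptions for illustration, not data from this log):

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree scans an IPv4 prefix in order and returns the first address
// not already allocated. A toy stand-in for Calico's block allocation
// (ipam.go assigns out of /26 blocks affine to a host); it assumes a
// tiny block and an in-memory "allocated" set, with no handles,
// reservations, or datastore writes.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.116.0/26")
	// Hypothetical prior state: .0-.3 already used on this node.
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.116.0"): true,
		netip.MustParseAddr("192.168.116.1"): true,
		netip.MustParseAddr("192.168.116.2"): true,
		netip.MustParseAddr("192.168.116.3"): true,
	}
	if ip, ok := firstFree(block, allocated); ok {
		fmt.Println("assigned", ip) // assigned 192.168.116.4, as in the log
	}
}
```

The host-wide lock in the log exists precisely because two concurrent CNI ADD operations scanning the same block this way would otherwise race to claim the same free address.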
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.596 [INFO][3499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.4/26] IPv6=[] ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" HandleID="k8s-pod-network.85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Workload="10.0.0.157-k8s-test--pod--1-eth0"
Jan 13 21:31:21.608627 containerd[1441]: 2025-01-13 21:31:21.598 [INFO][3486] cni-plugin/k8s.go 386: Populated endpoint ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46b1ce18-e6b6-4d1d-9300-301178447b39", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 31, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:31:21.609363 containerd[1441]: 2025-01-13 21:31:21.598 [INFO][3486] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.4/32] ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0"
Jan 13 21:31:21.609363 containerd[1441]: 2025-01-13 21:31:21.598 [INFO][3486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0"
Jan 13 21:31:21.609363 containerd[1441]: 2025-01-13 21:31:21.601 [INFO][3486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0"
Jan 13 21:31:21.609363 containerd[1441]: 2025-01-13 21:31:21.601 [INFO][3486] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.157-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46b1ce18-e6b6-4d1d-9300-301178447b39", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 31, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.157", ContainerID:"85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:32:06:f4:79:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 21:31:21.609363 containerd[1441]: 2025-01-13 21:31:21.605 [INFO][3486] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.157-k8s-test--pod--1-eth0"
Jan 13 21:31:21.628877 containerd[1441]: time="2025-01-13T21:31:21.628777457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:31:21.628877 containerd[1441]: time="2025-01-13T21:31:21.628835467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:31:21.628877 containerd[1441]: time="2025-01-13T21:31:21.628847850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:31:21.629067 containerd[1441]: time="2025-01-13T21:31:21.628935324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:31:21.651640 systemd[1]: Started cri-containerd-85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa.scope - libcontainer container 85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa.
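The endpoint written back above carries MAC 92:32:06:f4:79:bb for the host-side veth cali5ec59c6bf6e. Its first octet, 0x92, has the locally-administered bit (0x02) set and the multicast bit (0x01) clear, which is the signature of a randomly generated unicast MAC of the kind CNI dataplanes commonly assign to veth interfaces. A small Go sketch of generating such an address (illustrative only, not Calico's actual dataplane code):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally administered MAC:
// it sets bit 0x02 (locally administered) and clears bit 0x01
// (multicast) in the first octet, exactly the pattern that
// 92:32:06:f4:79:bb from the log satisfies.
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. a 92:32:06:f4:79:bb-style address
}
```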
Jan 13 21:31:21.663278 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:31:21.687165 containerd[1441]: time="2025-01-13T21:31:21.687119091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46b1ce18-e6b6-4d1d-9300-301178447b39,Namespace:default,Attempt:0,} returns sandbox id \"85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa\""
Jan 13 21:31:21.688641 containerd[1441]: time="2025-01-13T21:31:21.688565218Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:31:22.076614 containerd[1441]: time="2025-01-13T21:31:22.076563218Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:31:22.077330 containerd[1441]: time="2025-01-13T21:31:22.077292177Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 21:31:22.079884 containerd[1441]: time="2025-01-13T21:31:22.079831336Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 391.104054ms"
Jan 13 21:31:22.079884 containerd[1441]: time="2025-01-13T21:31:22.079869288Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:31:22.081721 containerd[1441]: time="2025-01-13T21:31:22.081697362Z" level=info msg="CreateContainer within sandbox \"85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 21:31:22.093976 containerd[1441]: time="2025-01-13T21:31:22.093919631Z" level=info msg="CreateContainer within sandbox \"85f689e6be20b293812da46bc5ae9e6afce6b65135ad0ecf42c749154211a4fa\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"63ba5a9da8748c4fc117e778fe1704fbec6e5f1ab9eb2d49f08a301cd3f1f28b\""
Jan 13 21:31:22.094497 containerd[1441]: time="2025-01-13T21:31:22.094466078Z" level=info msg="StartContainer for \"63ba5a9da8748c4fc117e778fe1704fbec6e5f1ab9eb2d49f08a301cd3f1f28b\""
Jan 13 21:31:22.128638 systemd[1]: Started cri-containerd-63ba5a9da8748c4fc117e778fe1704fbec6e5f1ab9eb2d49f08a301cd3f1f28b.scope - libcontainer container 63ba5a9da8748c4fc117e778fe1704fbec6e5f1ab9eb2d49f08a301cd3f1f28b.
Jan 13 21:31:22.152315 containerd[1441]: time="2025-01-13T21:31:22.152219905Z" level=info msg="StartContainer for \"63ba5a9da8748c4fc117e778fe1704fbec6e5f1ab9eb2d49f08a301cd3f1f28b\" returns successfully"
Jan 13 21:31:22.383528 kubelet[1763]: E0113 21:31:22.383463 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:23.384364 kubelet[1763]: E0113 21:31:23.384312 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:23.596658 systemd-networkd[1366]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 21:31:24.384939 kubelet[1763]: E0113 21:31:24.384899 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:25.385603 kubelet[1763]: E0113 21:31:25.385540 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:26.386355 kubelet[1763]: E0113 21:31:26.386281 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:27.386459 kubelet[1763]: E0113 21:31:27.386393 1763 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
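The window closes with the same kubelet error repeating: file_linux.go is the kubelet's file-based pod source, which polls a static-pod manifest directory and logs "path does not exist, ignoring" whenever that directory is absent, as /etc/kubernetes/manifests is on this node. It is noise rather than a failure; creating the directory, or pointing staticPodPath elsewhere in the kubelet configuration, silences it. In spirit the check looks like the following simplified Go sketch (not kubelet's actual implementation; the one-second cadence mirrors this log, not a kubelet default):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// pollManifests loosely mimics the kubelet file source: periodically
// stat the static-pod directory and skip, with a log line, when it is
// absent. Path and cadence are taken from the log above; everything
// else is a simplification for illustration.
func pollManifests(path string, every time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
				fmt.Printf("Unable to read config path: path does not exist, ignoring path=%q\n", path)
				continue
			}
			// ...otherwise read *.yaml manifests and hand them to the pod manager...
		}
	}
}

func main() {
	stop := make(chan struct{})
	go pollManifests("/etc/kubernetes/manifests", time.Second, stop)
	time.Sleep(3 * time.Second) // observe a few polls, then stop
	close(stop)
}
```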