Jan 13 21:32:36.957646 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:32:36.957686 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:32:36.957703 kernel: BIOS-provided physical RAM map:
Jan 13 21:32:36.957716 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:32:36.957727 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:32:36.957740 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:32:36.957758 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 21:32:36.957771 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 21:32:36.957784 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 21:32:36.957796 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:32:36.957858 kernel: NX (Execute Disable) protection: active
Jan 13 21:32:36.957872 kernel: APIC: Static calls initialized
Jan 13 21:32:36.957884 kernel: SMBIOS 2.7 present.
Jan 13 21:32:36.957897 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 21:32:36.957945 kernel: Hypervisor detected: KVM
Jan 13 21:32:36.957960 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:32:36.957974 kernel: kvm-clock: using sched offset of 6142119253 cycles
Jan 13 21:32:36.957989 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:32:36.958031 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 21:32:36.958046 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:32:36.958060 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:32:36.958078 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 21:32:36.958117 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:32:36.958131 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:32:36.958145 kernel: Using GB pages for direct mapping
Jan 13 21:32:36.958222 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:32:36.958241 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 21:32:36.958256 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 21:32:36.958270 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:32:36.958285 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 21:32:36.958304 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 21:32:36.958319 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:32:36.958334 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:32:36.958349 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 21:32:36.958363 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:32:36.958377 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 21:32:36.958393 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 21:32:36.958407 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 21:32:36.958423 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 21:32:36.958442 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 21:32:36.958463 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 21:32:36.958479 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 21:32:36.958522 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 21:32:36.958538 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 21:32:36.958558 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 21:32:36.958574 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 21:32:36.958589 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 21:32:36.958605 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 21:32:36.958620 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:32:36.958636 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:32:36.958651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 21:32:36.958666 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 21:32:36.958682 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 21:32:36.958702 kernel: Zone ranges:
Jan 13 21:32:36.958716 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:32:36.958730 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 21:32:36.958745 kernel: Normal empty
Jan 13 21:32:36.958760 kernel: Movable zone start for each node
Jan 13 21:32:36.958774 kernel: Early memory node ranges
Jan 13 21:32:36.958789 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:32:36.958804 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 21:32:36.958818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 21:32:36.958835 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:32:36.958849 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:32:36.958863 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 21:32:36.958877 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:32:36.958892 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:32:36.958906 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 21:32:36.958920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:32:36.958934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:32:36.958948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:32:36.958966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:32:36.958981 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:32:36.958995 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:32:36.959009 kernel: TSC deadline timer available
Jan 13 21:32:36.959024 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:32:36.959039 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:32:36.959055 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 21:32:36.959070 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:32:36.959086 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:32:36.959104 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:32:36.959125 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:32:36.959141 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:32:36.959157 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:32:36.959172 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:32:36.959188 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:32:36.959207 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:32:36.959224 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:32:36.959242 kernel: random: crng init done
Jan 13 21:32:36.959258 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:32:36.959274 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:32:36.959291 kernel: Fallback order for Node 0: 0
Jan 13 21:32:36.959306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 21:32:36.959322 kernel: Policy zone: DMA32
Jan 13 21:32:36.959338 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:32:36.959354 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 13 21:32:36.959370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:32:36.959390 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:32:36.959406 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:32:36.959421 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:32:36.959435 kernel: Dynamic Preempt: voluntary
Jan 13 21:32:36.959449 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:32:36.959465 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:32:36.959502 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:32:36.959517 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:32:36.959531 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:32:36.959545 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:32:36.959563 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:32:36.959692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:32:36.959709 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:32:36.959724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:32:36.959739 kernel: Console: colour VGA+ 80x25
Jan 13 21:32:36.959754 kernel: printk: console [ttyS0] enabled
Jan 13 21:32:36.959769 kernel: ACPI: Core revision 20230628
Jan 13 21:32:36.959784 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 21:32:36.959798 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:32:36.959816 kernel: x2apic enabled
Jan 13 21:32:36.959868 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:32:36.959898 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:32:36.959919 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 21:32:36.959936 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 21:32:36.959953 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 21:32:36.959968 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:32:36.959983 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:32:36.959997 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:32:36.960011 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:32:36.960025 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 21:32:36.960041 kernel: RETBleed: Vulnerable
Jan 13 21:32:36.960058 kernel: Speculative Store Bypass: Vulnerable
Jan 13 21:32:36.960073 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:32:36.960087 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:32:36.960102 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 21:32:36.960117 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:32:36.960130 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:32:36.960148 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:32:36.960163 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 21:32:36.960177 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 21:32:36.960192 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 21:32:36.960207 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 21:32:36.960222 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 21:32:36.960237 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 21:32:36.960252 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:32:36.960266 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 21:32:36.960281 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 21:32:36.960296 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 21:32:36.960314 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 21:32:36.960329 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 21:32:36.960344 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 21:32:36.960359 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 21:32:36.960411 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:32:36.960426 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:32:36.960441 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:32:36.960456 kernel: landlock: Up and running.
Jan 13 21:32:36.960471 kernel: SELinux: Initializing.
Jan 13 21:32:36.960500 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:32:36.960513 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 21:32:36.960526 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 21:32:36.960544 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:32:36.960558 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:32:36.960571 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:32:36.960586 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 21:32:36.960598 kernel: signal: max sigframe size: 3632
Jan 13 21:32:36.960611 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:32:36.960627 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:32:36.960642 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:32:36.960658 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:32:36.960677 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:32:36.960691 kernel: .... node #0, CPUs: #1
Jan 13 21:32:36.960707 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:32:36.960842 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:32:36.960863 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:32:36.960879 kernel: smpboot: Max logical packages: 1
Jan 13 21:32:36.960894 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 21:32:36.960912 kernel: devtmpfs: initialized
Jan 13 21:32:36.960932 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:32:36.960945 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:32:36.960960 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:32:36.960973 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:32:36.960986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:32:36.961000 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:32:36.961018 kernel: audit: type=2000 audit(1736803956.255:1): state=initialized audit_enabled=0 res=1
Jan 13 21:32:36.961033 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:32:36.961048 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:32:36.961068 kernel: cpuidle: using governor menu
Jan 13 21:32:36.961084 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:32:36.961102 kernel: dca service started, version 1.12.1
Jan 13 21:32:36.961119 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:32:36.961137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:32:36.961154 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:32:36.961173 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:32:36.961189 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:32:36.961206 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:32:36.961224 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:32:36.961239 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:32:36.961256 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:32:36.961272 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:32:36.961286 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:32:36.961362 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:32:36.961380 kernel: ACPI: Interpreter enabled
Jan 13 21:32:36.961396 kernel: ACPI: PM: (supports S0 S5)
Jan 13 21:32:36.961409 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:32:36.961428 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:32:36.961441 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:32:36.961454 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:32:36.961467 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:32:36.961700 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:32:36.961838 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:32:36.961963 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:32:36.961984 kernel: acpiphp: Slot [3] registered
Jan 13 21:32:36.961998 kernel: acpiphp: Slot [4] registered
Jan 13 21:32:36.962012 kernel: acpiphp: Slot [5] registered
Jan 13 21:32:36.962025 kernel: acpiphp: Slot [6] registered
Jan 13 21:32:36.962039 kernel: acpiphp: Slot [7] registered
Jan 13 21:32:36.962053 kernel: acpiphp: Slot [8] registered
Jan 13 21:32:36.962067 kernel: acpiphp: Slot [9] registered
Jan 13 21:32:36.962081 kernel: acpiphp: Slot [10] registered
Jan 13 21:32:36.962095 kernel: acpiphp: Slot [11] registered
Jan 13 21:32:36.962109 kernel: acpiphp: Slot [12] registered
Jan 13 21:32:36.962126 kernel: acpiphp: Slot [13] registered
Jan 13 21:32:36.962140 kernel: acpiphp: Slot [14] registered
Jan 13 21:32:36.962154 kernel: acpiphp: Slot [15] registered
Jan 13 21:32:36.962168 kernel: acpiphp: Slot [16] registered
Jan 13 21:32:36.962182 kernel: acpiphp: Slot [17] registered
Jan 13 21:32:36.962196 kernel: acpiphp: Slot [18] registered
Jan 13 21:32:36.962210 kernel: acpiphp: Slot [19] registered
Jan 13 21:32:36.962223 kernel: acpiphp: Slot [20] registered
Jan 13 21:32:36.962237 kernel: acpiphp: Slot [21] registered
Jan 13 21:32:36.962254 kernel: acpiphp: Slot [22] registered
Jan 13 21:32:36.962269 kernel: acpiphp: Slot [23] registered
Jan 13 21:32:36.962283 kernel: acpiphp: Slot [24] registered
Jan 13 21:32:36.962298 kernel: acpiphp: Slot [25] registered
Jan 13 21:32:36.962313 kernel: acpiphp: Slot [26] registered
Jan 13 21:32:36.962327 kernel: acpiphp: Slot [27] registered
Jan 13 21:32:36.962342 kernel: acpiphp: Slot [28] registered
Jan 13 21:32:36.962357 kernel: acpiphp: Slot [29] registered
Jan 13 21:32:36.962371 kernel: acpiphp: Slot [30] registered
Jan 13 21:32:36.962385 kernel: acpiphp: Slot [31] registered
Jan 13 21:32:36.962404 kernel: PCI host bridge to bus 0000:00
Jan 13 21:32:36.962558 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:32:36.962759 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:32:36.962889 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:32:36.963004 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 21:32:36.963118 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:32:36.963278 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:32:36.963435 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 21:32:36.963684 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 21:32:36.963904 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:32:36.964049 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 21:32:36.964183 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 21:32:36.964494 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 21:32:36.964721 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 21:32:36.964874 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 21:32:36.965152 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 21:32:36.966524 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 21:32:36.966721 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 21:32:36.966876 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 21:32:36.967017 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 21:32:36.967148 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:32:36.967302 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:32:36.967438 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 21:32:36.967700 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:32:36.967864 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 21:32:36.967888 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:32:36.967907 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:32:36.967930 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:32:36.967947 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:32:36.967964 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:32:36.967981 kernel: iommu: Default domain type: Translated
Jan 13 21:32:36.967997 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:32:36.968015 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:32:36.968031 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:32:36.968049 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:32:36.968065 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 21:32:36.968224 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 21:32:36.968376 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 21:32:36.968657 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:32:36.968682 kernel: vgaarb: loaded
Jan 13 21:32:36.968697 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 21:32:36.968711 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 21:32:36.968725 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:32:36.968738 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:32:36.968758 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:32:36.968771 kernel: pnp: PnP ACPI init
Jan 13 21:32:36.968786 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 21:32:36.968800 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:32:36.968816 kernel: NET: Registered PF_INET protocol family
Jan 13 21:32:36.968832 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:32:36.968847 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 21:32:36.968862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:32:36.968876 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:32:36.968896 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 21:32:36.968912 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 21:32:36.968926 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:32:36.968941 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 21:32:36.968954 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:32:36.968968 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:32:36.969107 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:32:36.969228 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:32:36.969348 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:32:36.969462 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 21:32:36.969741 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:32:36.969763 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:32:36.969778 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:32:36.969792 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 21:32:36.969806 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:32:36.969820 kernel: Initialise system trusted keyrings
Jan 13 21:32:36.969834 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 21:32:36.969852 kernel: Key type asymmetric registered
Jan 13 21:32:36.969866 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:32:36.969880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:32:36.969894 kernel: io scheduler mq-deadline registered
Jan 13 21:32:36.969977 kernel: io scheduler kyber registered
Jan 13 21:32:36.969991 kernel: io scheduler bfq registered
Jan 13 21:32:36.970003 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:32:36.970016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:32:36.970031 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:32:36.970049 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:32:36.970063 kernel: i8042: Warning: Keylock active
Jan 13 21:32:36.970077 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:32:36.971267 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:32:36.971427 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:32:36.971581 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:32:36.971708 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:32:36 UTC (1736803956)
Jan 13 21:32:36.971834 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:32:36.971858 kernel: intel_pstate: CPU model not supported
Jan 13 21:32:36.971873 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:32:36.971887 kernel: Segment Routing with IPv6
Jan 13 21:32:36.971901 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:32:36.971916 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:32:36.971931 kernel: Key type dns_resolver registered
Jan 13 21:32:36.971944 kernel: IPI shorthand broadcast: enabled
Jan 13 21:32:36.971959 kernel: sched_clock: Marking stable (537021770, 257326370)->(877737691, -83389551)
Jan 13 21:32:36.971973 kernel: registered taskstats version 1
Jan 13 21:32:36.971991 kernel: Loading compiled-in X.509 certificates
Jan 13 21:32:36.972004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:32:36.972018 kernel: Key type .fscrypt registered
Jan 13 21:32:36.972031 kernel: Key type fscrypt-provisioning registered
Jan 13 21:32:36.972045 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:32:36.972060 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:32:36.972074 kernel: ima: No architecture policies found
Jan 13 21:32:36.972087 kernel: clk: Disabling unused clocks
Jan 13 21:32:36.972105 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:32:36.972119 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:32:36.972133 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:32:36.972147 kernel: Run /init as init process
Jan 13 21:32:36.972160 kernel: with arguments:
Jan 13 21:32:36.972174 kernel: /init
Jan 13 21:32:36.972187 kernel: with environment:
Jan 13 21:32:36.972200 kernel: HOME=/
Jan 13 21:32:36.972214 kernel: TERM=linux
Jan 13 21:32:36.972228 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:32:36.972252 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:32:36.972282 systemd[1]: Detected virtualization amazon.
Jan 13 21:32:36.972300 systemd[1]: Detected architecture x86-64.
Jan 13 21:32:36.972315 systemd[1]: Running in initrd.
Jan 13 21:32:36.972332 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:32:36.972346 systemd[1]: Hostname set to <localhost>.
Jan 13 21:32:36.972362 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:32:36.972429 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:32:36.972444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:32:36.972459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:32:36.972476 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:32:36.973523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:32:36.973559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:32:36.973576 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:32:36.973596 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:32:36.973614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:32:36.973633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:32:36.973651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:32:36.973669 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:32:36.973691 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:32:36.973709 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:32:36.973727 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:32:36.973745 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:32:36.973763 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:32:36.973780 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:32:36.973799 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:32:36.973817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:32:36.973835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:32:36.973858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:32:36.973876 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:32:36.973947 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:32:36.973966 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:32:36.973984 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:32:36.974001 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:32:36.974021 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:32:36.974043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:32:36.974062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:32:36.974114 systemd-journald[178]: Collecting audit messages is disabled.
Jan 13 21:32:36.974159 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:32:36.974637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:32:36.974841 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:32:36.974865 systemd-journald[178]: Journal started
Jan 13 21:32:36.974939 systemd-journald[178]: Runtime Journal (/run/log/journal/ec24760f6e600d05b9cfca9c9975384a) is 4.8M, max 38.6M, 33.7M free.
Jan 13 21:32:36.992536 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:32:36.993680 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:32:36.998375 systemd-modules-load[179]: Inserted module 'overlay'
Jan 13 21:32:37.196147 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 13 21:32:37.196189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:32:37.196213 kernel: Bridge firewalling registered
Jan 13 21:32:37.002209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:32:37.046246 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 13 21:32:37.199185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:32:37.202670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:32:37.212742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:32:37.227040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:32:37.228665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:32:37.229374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:32:37.235679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:32:37.254943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:32:37.262734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:32:37.265163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:32:37.267921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:32:37.277723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:32:37.301123 dracut-cmdline[215]: dracut-dracut-053
Jan 13 21:32:37.305327 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:32:37.321446 systemd-resolved[211]: Positive Trust Anchors:
Jan 13 21:32:37.321464 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:32:37.321550 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:32:37.340379 systemd-resolved[211]: Defaulting to hostname 'linux'.
Jan 13 21:32:37.343658 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:32:37.346856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:32:37.415522 kernel: SCSI subsystem initialized
Jan 13 21:32:37.427543 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:32:37.440514 kernel: iscsi: registered transport (tcp)
Jan 13 21:32:37.463850 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:32:37.464010 kernel: QLogic iSCSI HBA Driver
Jan 13 21:32:37.507133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:32:37.515443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:32:37.553281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:32:37.553364 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:32:37.553385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:32:37.596513 kernel: raid6: avx512x4 gen() 14145 MB/s
Jan 13 21:32:37.613540 kernel: raid6: avx512x2 gen() 15260 MB/s
Jan 13 21:32:37.630541 kernel: raid6: avx512x1 gen() 14532 MB/s
Jan 13 21:32:37.647511 kernel: raid6: avx2x4 gen() 15109 MB/s
Jan 13 21:32:37.664519 kernel: raid6: avx2x2 gen() 15341 MB/s
Jan 13 21:32:37.681580 kernel: raid6: avx2x1 gen() 11384 MB/s
Jan 13 21:32:37.681653 kernel: raid6: using algorithm avx2x2 gen() 15341 MB/s
Jan 13 21:32:37.699512 kernel: raid6: .... xor() 15661 MB/s, rmw enabled
Jan 13 21:32:37.699672 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 21:32:37.723516 kernel: xor: automatically using best checksumming function avx
Jan 13 21:32:37.915515 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:32:37.927757 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:32:37.941722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:32:37.967081 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 21:32:37.972961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:32:37.983734 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:32:37.999575 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jan 13 21:32:38.034041 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:32:38.043781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:32:38.140864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:32:38.153973 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:32:38.194681 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:32:38.203758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:32:38.205349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:32:38.215388 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:32:38.227774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:32:38.275378 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:32:38.282534 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:32:38.287816 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:32:38.300184 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:32:38.300457 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 21:32:38.300716 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:39:22:e7:a4:d7 Jan 13 21:32:38.303464 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:32:38.325741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:32:38.325928 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:38.328196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:32:38.329592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:32:38.329796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:38.331807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:38.344752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:38.352437 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:32:38.352472 kernel: AES CTR mode by8 optimization enabled Jan 13 21:32:38.403656 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 21:32:38.403938 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 21:32:38.412643 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 21:32:38.419509 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:32:38.419572 kernel: GPT:9289727 != 16777215 Jan 13 21:32:38.419596 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:32:38.419629 kernel: GPT:9289727 != 16777215 Jan 13 21:32:38.419652 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:32:38.419674 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:32:38.531519 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (448) Jan 13 21:32:38.587523 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (466) Jan 13 21:32:38.652498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:38.662221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:32:38.683578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 21:32:38.699262 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 21:32:38.718635 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 21:32:38.719191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 21:32:38.734364 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:38.747308 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:32:38.754727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:32:38.785444 disk-uuid[629]: Primary Header is updated. Jan 13 21:32:38.785444 disk-uuid[629]: Secondary Entries is updated. Jan 13 21:32:38.785444 disk-uuid[629]: Secondary Header is updated. 
Jan 13 21:32:38.792536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:32:38.800518 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:32:38.812544 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:32:39.813509 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:32:39.813799 disk-uuid[630]: The operation has completed successfully.
Jan 13 21:32:39.983165 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:32:39.983291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:32:40.018659 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:32:40.033528 sh[973]: Success
Jan 13 21:32:40.055781 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:32:40.175036 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:32:40.184614 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:32:40.189418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:32:40.235702 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:32:40.235767 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:32:40.235787 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:32:40.236622 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:32:40.237882 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:32:40.348031 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:32:40.363843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:32:40.366654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:32:40.374803 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:32:40.383863 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:32:40.410824 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:32:40.411017 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:32:40.411059 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:32:40.416602 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:32:40.432420 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:32:40.433705 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:32:40.440640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:32:40.449813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:32:40.512303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:32:40.520685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:32:40.572036 systemd-networkd[1167]: lo: Link UP
Jan 13 21:32:40.573994 systemd-networkd[1167]: lo: Gained carrier
Jan 13 21:32:40.589686 systemd-networkd[1167]: Enumeration completed
Jan 13 21:32:40.591098 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:32:40.591104 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:32:40.605691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:32:40.624012 systemd[1]: Reached target network.target - Network.
Jan 13 21:32:40.628191 systemd-networkd[1167]: eth0: Link UP
Jan 13 21:32:40.628321 systemd-networkd[1167]: eth0: Gained carrier
Jan 13 21:32:40.628339 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:32:40.652833 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.17.229/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:32:40.680799 ignition[1107]: Ignition 2.19.0
Jan 13 21:32:40.680977 ignition[1107]: Stage: fetch-offline
Jan 13 21:32:40.682036 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:32:40.682050 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:32:40.685611 ignition[1107]: Ignition finished successfully
Jan 13 21:32:40.687272 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:32:40.693953 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:32:40.726104 ignition[1176]: Ignition 2.19.0
Jan 13 21:32:40.726123 ignition[1176]: Stage: fetch
Jan 13 21:32:40.726784 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:32:40.726798 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:32:40.726908 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:32:40.736219 ignition[1176]: PUT result: OK
Jan 13 21:32:40.741746 ignition[1176]: parsed url from cmdline: ""
Jan 13 21:32:40.741758 ignition[1176]: no config URL provided
Jan 13 21:32:40.741769 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:32:40.741805 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:32:40.741829 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:32:40.747146 ignition[1176]: PUT result: OK
Jan 13 21:32:40.749178 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:32:40.755134 ignition[1176]: GET result: OK
Jan 13 21:32:40.755208 ignition[1176]: parsing config with SHA512: 11a281b17838696273076d8d86c8963777d2263a73252451a99250815d051c25d8ccc396f5bcf63851adee1869bed60c883cbceb3472ba8be2959593cf7a8bc8
Jan 13 21:32:40.758784 unknown[1176]: fetched base config from "system"
Jan 13 21:32:40.759057 ignition[1176]: fetch: fetch complete
Jan 13 21:32:40.758796 unknown[1176]: fetched base config from "system"
Jan 13 21:32:40.759062 ignition[1176]: fetch: fetch passed
Jan 13 21:32:40.758802 unknown[1176]: fetched user config from "aws"
Jan 13 21:32:40.759096 ignition[1176]: Ignition finished successfully
Jan 13 21:32:40.765046 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:32:40.772948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:32:40.811514 ignition[1182]: Ignition 2.19.0
Jan 13 21:32:40.811529 ignition[1182]: Stage: kargs
Jan 13 21:32:40.812209 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:32:40.812223 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:32:40.812390 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:32:40.813846 ignition[1182]: PUT result: OK
Jan 13 21:32:40.820238 ignition[1182]: kargs: kargs passed
Jan 13 21:32:40.820313 ignition[1182]: Ignition finished successfully
Jan 13 21:32:40.823035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:32:40.830771 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:32:40.850140 ignition[1188]: Ignition 2.19.0
Jan 13 21:32:40.850154 ignition[1188]: Stage: disks
Jan 13 21:32:40.850703 ignition[1188]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:32:40.850716 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:32:40.850829 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:32:40.852308 ignition[1188]: PUT result: OK
Jan 13 21:32:40.870224 ignition[1188]: disks: disks passed
Jan 13 21:32:40.870317 ignition[1188]: Ignition finished successfully
Jan 13 21:32:40.872739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:32:40.874749 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:32:40.876744 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:32:40.879478 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:32:40.880664 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:32:40.884637 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:32:40.893237 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:32:40.916472 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:32:40.919636 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:32:40.925773 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:32:41.051522 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:32:41.051924 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:32:41.053509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:32:41.065720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:32:41.083745 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:32:41.086756 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:32:41.086939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:32:41.090328 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:32:41.122525 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215)
Jan 13 21:32:41.134132 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:32:41.134211 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:32:41.134233 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:32:41.127216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:32:41.142734 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:32:41.149516 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:32:41.153220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:32:41.400470 initrd-setup-root[1245]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:32:41.408702 initrd-setup-root[1252]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:32:41.427861 initrd-setup-root[1259]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:32:41.435463 initrd-setup-root[1266]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:32:41.619283 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:32:41.626614 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:32:41.640699 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:32:41.644329 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:32:41.646780 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:32:41.676661 ignition[1333]: INFO : Ignition 2.19.0
Jan 13 21:32:41.678612 ignition[1333]: INFO : Stage: mount
Jan 13 21:32:41.678612 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:32:41.678612 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:32:41.678612 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:32:41.684334 ignition[1333]: INFO : PUT result: OK
Jan 13 21:32:41.687645 ignition[1333]: INFO : mount: mount passed
Jan 13 21:32:41.688872 ignition[1333]: INFO : Ignition finished successfully
Jan 13 21:32:41.693434 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:32:41.699688 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:32:41.718162 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:32:41.730160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:32:41.754234 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1345)
Jan 13 21:32:41.758509 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:32:41.758578 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:32:41.759907 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:32:41.769516 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:32:41.772081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:32:41.803537 ignition[1362]: INFO : Ignition 2.19.0 Jan 13 21:32:41.804659 ignition[1362]: INFO : Stage: files Jan 13 21:32:41.805692 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:41.805692 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:32:41.805692 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:32:41.810234 ignition[1362]: INFO : PUT result: OK Jan 13 21:32:41.813393 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:32:41.816221 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:32:41.816221 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:32:41.824454 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:32:41.826224 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:32:41.828288 unknown[1362]: wrote ssh authorized keys file for user: core Jan 13 21:32:41.829959 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:32:41.832316 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:32:41.834200 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:32:41.836133 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:32:41.838226 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:32:42.175897 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 13 21:32:42.529808 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:32:42.529808 ignition[1362]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 13 21:32:42.534198 ignition[1362]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:32:42.536663 ignition[1362]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:32:42.536663 ignition[1362]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 13 21:32:42.540618 ignition[1362]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:32:42.540618 ignition[1362]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:32:42.540618 ignition[1362]: INFO : files: files passed Jan 13 21:32:42.540618 ignition[1362]: INFO : Ignition finished successfully Jan 13 21:32:42.541820 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:32:42.551688 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:32:42.553831 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:32:42.563122 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:32:42.563269 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:32:42.574478 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:42.574478 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:42.579612 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:32:42.582801 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:42.583724 systemd-networkd[1167]: eth0: Gained IPv6LL Jan 13 21:32:42.585131 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:32:42.593674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:32:42.635798 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:32:42.637641 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:32:42.642069 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:32:42.644091 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:32:42.646353 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:32:42.654653 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:32:42.669977 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:42.677021 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:32:42.692034 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:42.692217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:42.697283 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:32:42.699618 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:32:42.700845 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:32:42.708255 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 13 21:32:42.710665 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:32:42.712668 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:32:42.715180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:32:42.725305 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:32:42.736164 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:32:42.736986 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:32:42.748520 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:32:42.751302 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:32:42.755866 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:32:42.764655 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:32:42.764992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:32:42.769159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:42.770651 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:42.775955 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:32:42.776050 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:42.780633 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:32:42.781907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:32:42.785031 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:32:42.786614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:32:42.788659 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:32:42.788827 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:32:42.796732 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:32:42.798599 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:32:42.798788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:42.803815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:32:42.808604 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:32:42.808823 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:42.832937 ignition[1415]: INFO : Ignition 2.19.0 Jan 13 21:32:42.832937 ignition[1415]: INFO : Stage: umount Jan 13 21:32:42.810707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:32:42.844203 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:32:42.844203 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:32:42.811787 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:32:42.866373 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:32:42.866373 ignition[1415]: INFO : PUT result: OK Jan 13 21:32:42.845292 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:32:42.845408 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 13 21:32:42.887535 ignition[1415]: INFO : umount: umount passed Jan 13 21:32:42.889665 ignition[1415]: INFO : Ignition finished successfully Jan 13 21:32:42.890080 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:32:42.890222 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:32:42.892845 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:32:42.892948 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:32:42.897063 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:32:42.897140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:32:42.899431 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:32:42.899479 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:32:42.904288 systemd[1]: Stopped target network.target - Network. Jan 13 21:32:42.908087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:32:42.909206 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:32:42.910835 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:32:42.913132 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:32:42.920444 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:42.920596 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:32:42.925114 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:32:42.927133 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:32:42.927232 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:32:42.935588 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:32:42.935645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:32:42.937786 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:32:42.938120 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:32:42.940307 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:32:42.940386 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:32:42.945095 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:32:42.949719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:32:42.952610 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:32:42.953460 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:32:42.953601 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:32:42.956926 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:32:42.957028 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:32:42.959534 systemd-networkd[1167]: eth0: DHCPv6 lease lost Jan 13 21:32:42.962085 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:32:42.962217 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:32:42.969210 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:32:42.971518 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:32:42.979668 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:32:42.979749 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:32:42.988603 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:32:42.989907 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:32:42.989989 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:32:42.991764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:32:42.991831 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:42.993336 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:32:42.993604 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:42.996401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:32:42.996453 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:43.004951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:43.023127 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:32:43.023267 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:32:43.035201 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:32:43.035431 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:43.039491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:32:43.039559 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:43.044655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:32:43.044711 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:43.046819 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:32:43.046917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:32:43.058089 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:32:43.058174 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:32:43.060457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:32:43.060541 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:32:43.069723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:32:43.070921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:32:43.071003 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:43.072598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:32:43.072674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:43.080090 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:32:43.080204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:32:43.082616 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:32:43.093704 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:32:43.120550 systemd[1]: Switching root. Jan 13 21:32:43.140507 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:32:43.140592 systemd-journald[178]: Journal stopped Jan 13 21:32:44.934054 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:32:44.934147 kernel: SELinux: policy capability open_perms=1 Jan 13 21:32:44.934168 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:32:44.934187 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:32:44.934209 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:32:44.934227 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:32:44.934248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:32:44.934268 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:32:44.934441 kernel: audit: type=1403 audit(1736803963.680:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:32:44.934474 systemd[1]: Successfully loaded SELinux policy in 58.695ms. Jan 13 21:32:44.934574 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.369ms. Jan 13 21:32:44.934599 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:32:44.934623 systemd[1]: Detected virtualization amazon. Jan 13 21:32:44.934643 systemd[1]: Detected architecture x86-64. Jan 13 21:32:44.934663 systemd[1]: Detected first boot. Jan 13 21:32:44.934682 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:32:44.934703 zram_generator::config[1475]: No configuration found. Jan 13 21:32:44.934728 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:32:44.934750 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:32:44.934771 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:32:44.934793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:32:44.934814 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:32:44.934834 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:32:44.934853 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:32:44.934873 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:32:44.934893 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:32:44.934913 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:32:44.934935 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:32:44.934955 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:32:44.934975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:32:44.934995 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:32:44.935014 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:32:44.935034 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:32:44.935108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
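The hand-off out of the initrd above ends with the kernel loading the SELinux policy ("Successfully loaded SELinux policy in 58.695ms"). A small sketch for checking the resulting state from userspace via the kernel's standard selinuxfs interface; nothing here is Flatcar-specific:

```python
from pathlib import Path

selinuxfs = Path("/sys/fs/selinux")
if selinuxfs.exists():
    # "1" means enforcing, "0" permissive.
    mode = (selinuxfs / "enforce").read_text().strip()
    print("SELinux loaded, mode:", "enforcing" if mode == "1" else "permissive")
else:
    print("SELinux not enabled on this boot")
```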
Jan 13 21:32:44.935131 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:32:44.935155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:32:44.935175 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:32:44.935194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:32:44.935216 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:32:44.935236 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:32:44.935256 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:32:44.935275 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:32:44.935296 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:32:44.935319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:32:44.935338 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:32:44.935363 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:32:44.935383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:32:44.935403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:32:44.935495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:32:44.935519 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:32:44.935545 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:32:44.935565 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:32:44.935585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:44.935609 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:32:44.935630 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:32:44.935689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:32:44.935709 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:32:44.935729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:44.935750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:32:44.935772 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:32:44.935792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:44.935815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:44.935835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:44.935855 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:32:44.935874 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:44.935894 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:32:44.935914 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 13 21:32:44.935939 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 21:32:44.935959 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:32:44.935978 kernel: fuse: init (API version 7.39) Jan 13 21:32:44.936051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:32:44.936072 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:32:44.936092 kernel: loop: module loaded Jan 13 21:32:44.936113 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:32:44.936133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:32:44.936153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:44.936254 systemd-journald[1572]: Collecting audit messages is disabled. Jan 13 21:32:44.936299 systemd-journald[1572]: Journal started Jan 13 21:32:44.936336 systemd-journald[1572]: Runtime Journal (/run/log/journal/ec24760f6e600d05b9cfca9c9975384a) is 4.8M, max 38.6M, 33.7M free. Jan 13 21:32:44.965128 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:32:44.954996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:32:44.956627 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:32:44.958234 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:32:44.959501 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:32:44.962278 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:32:44.963513 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:32:44.965076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:32:44.967422 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:32:44.967657 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:32:44.971118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:44.971445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:44.973129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:44.973334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:44.975902 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:32:44.976192 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:32:44.978057 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:44.978256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:44.980121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:32:44.986284 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:32:44.988731 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:32:45.001508 kernel: ACPI: bus type drm_connector registered Jan 13 21:32:45.001326 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:45.002326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
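The journald lines above size the runtime journal ("Runtime Journal ... is 4.8M, max 38.6M, 33.7M free"). After boot, the same accounting can be queried with journalctl's --disk-usage flag; this sketch just shells out to it:

```python
import subprocess

# Print journald's own disk-usage accounting for runtime and persistent journals.
result = subprocess.run(
    ["journalctl", "--disk-usage"], capture_output=True, text=True, check=True
)
print(result.stdout.strip())
```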
Jan 13 21:32:45.021967 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:32:45.031695 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:32:45.040680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:32:45.042094 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:32:45.050672 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:32:45.063693 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:32:45.066599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:45.080682 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:32:45.084262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:45.092736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:32:45.095654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:32:45.106419 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:32:45.110801 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:32:45.112336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:32:45.120120 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:32:45.125765 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:32:45.137666 systemd-journald[1572]: Time spent on flushing to /var/log/journal/ec24760f6e600d05b9cfca9c9975384a is 69.204ms for 933 entries. Jan 13 21:32:45.137666 systemd-journald[1572]: System Journal (/var/log/journal/ec24760f6e600d05b9cfca9c9975384a) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:32:45.215764 systemd-journald[1572]: Received client request to flush runtime journal. Jan 13 21:32:45.190074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:32:45.213938 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Jan 13 21:32:45.213961 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Jan 13 21:32:45.221035 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:32:45.231034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:32:45.232912 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:32:45.243785 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:32:45.248669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:32:45.270891 udevadm[1642]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:32:45.308695 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:32:45.319806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:32:45.350972 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. 
Jan 13 21:32:45.351605 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Jan 13 21:32:45.371301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:32:45.947587 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:32:45.955715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:32:46.003043 systemd-udevd[1652]: Using default interface naming scheme 'v255'. Jan 13 21:32:46.044469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:32:46.057710 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:32:46.104990 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:32:46.184874 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 21:32:46.200116 (udev-worker)[1653]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:32:46.223847 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:32:46.334929 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:32:46.337514 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 21:32:46.340522 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:32:46.343881 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 21:32:46.349585 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:32:46.363337 systemd-networkd[1658]: lo: Link UP Jan 13 21:32:46.363347 systemd-networkd[1658]: lo: Gained carrier Jan 13 21:32:46.366013 systemd-networkd[1658]: Enumeration completed Jan 13 21:32:46.366699 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:32:46.369396 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:46.369406 systemd-networkd[1658]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:32:46.372268 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:46.372376 systemd-networkd[1658]: eth0: Link UP Jan 13 21:32:46.374983 systemd-networkd[1658]: eth0: Gained carrier Jan 13 21:32:46.375005 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:32:46.383592 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:32:46.390563 systemd-networkd[1658]: eth0: DHCPv4 address 172.31.17.229/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:32:46.391510 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 21:32:46.440055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:32:46.448870 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:32:46.452504 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1665) Jan 13 21:32:46.647828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:32:46.730955 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
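Among the udev and networkd lines above, eth0 acquires "172.31.17.229/20, gateway 172.31.16.1" via DHCPv4. A quick stdlib sanity check of those logged lease parameters, confirming the gateway sits inside the interface's /20:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.17.229/20")     # address from the lease above
gateway = ipaddress.ip_address("172.31.16.1")          # gateway from the lease above
print("network:", iface.network)                       # 172.31.16.0/20
print("gateway in subnet:", gateway in iface.network)  # True
```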
Jan 13 21:32:46.747907 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:32:46.751984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:32:46.786567 lvm[1775]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:46.816702 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:32:46.819267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:32:46.823691 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:32:46.832043 lvm[1779]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:32:46.859686 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:32:46.861914 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:32:46.863385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:32:46.863420 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:32:46.864564 systemd[1]: Reached target machines.target - Containers. Jan 13 21:32:46.867245 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:32:46.874253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:32:46.878679 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:32:46.879992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:46.888686 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:32:46.893830 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:32:46.899305 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:32:46.904039 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:32:46.915967 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:32:46.941105 kernel: loop0: detected capacity change from 0 to 142488 Jan 13 21:32:46.942023 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:32:46.943427 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:32:47.020028 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:32:47.039512 kernel: loop1: detected capacity change from 0 to 140768 Jan 13 21:32:47.110509 kernel: loop2: detected capacity change from 0 to 61336 Jan 13 21:32:47.223529 kernel: loop3: detected capacity change from 0 to 211296 Jan 13 21:32:47.357514 kernel: loop4: detected capacity change from 0 to 142488 Jan 13 21:32:47.386851 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:32:47.416507 kernel: loop6: detected capacity change from 0 to 61336 Jan 13 21:32:47.435505 kernel: loop7: detected capacity change from 0 to 211296 Jan 13 21:32:47.477409 (sd-merge)[1800]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:32:47.478227 (sd-merge)[1800]: Merged extensions into '/usr'. 
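The loop0..loop7 capacity changes and the (sd-merge) lines above are systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extension images into /usr. Once booted, the merge can be inspected with systemd-sysext's status verb; this sketch only wraps that existing command:

```python
import subprocess

# List merged system extensions and the hierarchies they overlay.
out = subprocess.run(
    ["systemd-sysext", "status"], capture_output=True, text=True, check=True
)
print(out.stdout)
```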
Jan 13 21:32:47.490953 systemd[1]: Reloading requested from client PID 1787 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:32:47.490974 systemd[1]: Reloading... Jan 13 21:32:47.611524 zram_generator::config[1831]: No configuration found. Jan 13 21:32:47.797180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:47.831607 systemd-networkd[1658]: eth0: Gained IPv6LL Jan 13 21:32:47.898362 systemd[1]: Reloading finished in 406 ms. Jan 13 21:32:47.936459 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:32:47.941357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:32:47.957920 systemd[1]: Starting ensure-sysext.service... Jan 13 21:32:47.961735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:32:47.977185 systemd[1]: Reloading requested from client PID 1884 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:32:47.977208 systemd[1]: Reloading... Jan 13 21:32:47.993156 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:32:47.994666 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:32:47.996440 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:32:47.997181 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jan 13 21:32:47.997434 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jan 13 21:32:48.006221 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:48.007089 ldconfig[1783]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:32:48.007447 systemd-tmpfiles[1885]: Skipping /boot Jan 13 21:32:48.034922 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:32:48.034939 systemd-tmpfiles[1885]: Skipping /boot Jan 13 21:32:48.101511 zram_generator::config[1916]: No configuration found. Jan 13 21:32:48.257158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:48.330114 systemd[1]: Reloading finished in 352 ms. Jan 13 21:32:48.349935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:32:48.361563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:32:48.372666 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:32:48.382196 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:32:48.386652 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:32:48.398745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:32:48.406056 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:32:48.434602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 13 21:32:48.435526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:48.440259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:32:48.453058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:48.461643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:32:48.463081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:48.463653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:48.471172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:48.471810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:48.483816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:32:48.484061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:32:48.496266 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:32:48.496552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:32:48.508037 systemd[1]: Finished ensure-sysext.service. Jan 13 21:32:48.512972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:48.513342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:32:48.522911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:32:48.526859 augenrules[2006]: No rules Jan 13 21:32:48.538293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:32:48.539808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:32:48.539914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:32:48.539967 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:32:48.542409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:32:48.543235 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:32:48.546398 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:32:48.548564 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:32:48.550442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:32:48.550682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:32:48.553917 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:32:48.556994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:32:48.569624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:32:48.576712 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 13 21:32:48.600218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:32:48.604098 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:32:48.608434 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:32:48.628931 systemd-resolved[1977]: Positive Trust Anchors: Jan 13 21:32:48.628950 systemd-resolved[1977]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:32:48.628998 systemd-resolved[1977]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:32:48.634528 systemd-resolved[1977]: Defaulting to hostname 'linux'. Jan 13 21:32:48.636418 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:32:48.637743 systemd[1]: Reached target network.target - Network. Jan 13 21:32:48.638727 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:32:48.639853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:32:48.641165 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:32:48.642595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:32:48.644048 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:32:48.645669 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:32:48.647017 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:32:48.649287 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:32:48.650611 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:32:48.650638 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:32:48.653207 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:32:48.655067 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:32:48.658573 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:32:48.662848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:32:48.669717 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:32:48.671729 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:32:48.673008 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:32:48.675045 systemd[1]: System is tainted: cgroupsv1 Jan 13 21:32:48.675103 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:32:48.675131 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:32:48.682659 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:32:48.692720 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:32:48.700990 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:32:48.709598 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:32:48.728802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:32:48.730729 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:32:48.742814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:48.775022 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:32:48.785658 jq[2034]: false Jan 13 21:32:48.797762 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:32:48.810168 extend-filesystems[2036]: Found loop4 Jan 13 21:32:48.810278 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:32:48.812925 extend-filesystems[2036]: Found loop5 Jan 13 21:32:48.813807 extend-filesystems[2036]: Found loop6 Jan 13 21:32:48.814754 extend-filesystems[2036]: Found loop7 Jan 13 21:32:48.815738 extend-filesystems[2036]: Found nvme0n1 Jan 13 21:32:48.816791 extend-filesystems[2036]: Found nvme0n1p1 Jan 13 21:32:48.817934 extend-filesystems[2036]: Found nvme0n1p2 Jan 13 21:32:48.819074 extend-filesystems[2036]: Found nvme0n1p3 Jan 13 21:32:48.820021 extend-filesystems[2036]: Found usr Jan 13 21:32:48.820846 extend-filesystems[2036]: Found nvme0n1p4 Jan 13 21:32:48.821713 extend-filesystems[2036]: Found nvme0n1p6 Jan 13 21:32:48.822636 extend-filesystems[2036]: Found nvme0n1p7 Jan 13 21:32:48.823585 extend-filesystems[2036]: Found nvme0n1p9 Jan 13 21:32:48.823598 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:32:48.825775 extend-filesystems[2036]: Checking size of /dev/nvme0n1p9 Jan 13 21:32:48.836688 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:32:48.847086 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:32:48.871170 dbus-daemon[2033]: [system] SELinux support is enabled Jan 13 21:32:48.879733 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:32:48.881597 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:32:48.894699 systemd[1]: Starting update-engine.service - Update Engine... 
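extend-filesystems above walks the block devices (loop4..loop7, nvme0n1 and its partitions) before checking the size of /dev/nvme0n1p9. A minimal sketch that enumerates the same devices via sysfs; the /sys/class/block layout is the kernel's standard interface, everything else is illustrative:

```python
from pathlib import Path

# Enumerate block devices and partitions, roughly what the
# "Found ..." lines above report.
for dev in sorted(p.name for p in Path("/sys/class/block").iterdir()):
    print("Found", dev)
```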
Jan 13 21:32:48.904211 dbus-daemon[2033]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1658 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch failed with 404: resource not found Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetch successful Jan 13 21:32:48.904631 coreos-metadata[2032]: Jan 13 21:32:48.904 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:32:48.926001 coreos-metadata[2032]: Jan 13 21:32:48.914 INFO Fetch successful Jan 13 21:32:48.926088 extend-filesystems[2036]: Resized partition /dev/nvme0n1p9 Jan 13 21:32:48.942734 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:32:48.907746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:32:48.943061 extend-filesystems[2066]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:32:48.910848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:32:48.932617 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
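coreos-metadata above walks a fixed list of IMDS paths under /2021-01-03/meta-data/ and treats the 404 on the ipv6 attribute as "not present" rather than as an error. A sketch of that fetch pattern, reusing a token from the earlier IMDSv2 sketch; the helper name is illustrative:

```python
import urllib.error
import urllib.request

def fetch_meta(path: str, token: str) -> str | None:
    # GET one metadata attribute with the IMDSv2 token header.
    req = urllib.request.Request(
        f"http://169.254.169.254/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None  # e.g. the ipv6 attribute on this instance
        raise
```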
Jan 13 21:32:48.980520 jq[2057]: true Jan 13 21:32:48.932932 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:32:48.973023 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:32:48.977283 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:32:48.984888 ntpd[2044]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: ---------------------------------------------------- Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: corporation. Support and training for ntp-4 are Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: available at https://www.nwtime.org/support Jan 13 21:32:48.992138 ntpd[2044]: 13 Jan 21:32:48 ntpd[2044]: ---------------------------------------------------- Jan 13 21:32:48.984920 ntpd[2044]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:32:48.985159 ntpd[2044]: ---------------------------------------------------- Jan 13 21:32:49.004846 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: proto: precision = 0.077 usec (-24) Jan 13 21:32:49.004846 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: basedate set to 2025-01-01 Jan 13 21:32:49.004846 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: gps base set to 2025-01-05 (week 2348) Jan 13 21:32:48.985182 ntpd[2044]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:32:48.985192 ntpd[2044]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:32:48.985203 ntpd[2044]: corporation. 
Support and training for ntp-4 are Jan 13 21:32:48.985213 ntpd[2044]: available at https://www.nwtime.org/support Jan 13 21:32:48.985223 ntpd[2044]: ---------------------------------------------------- Jan 13 21:32:49.001928 ntpd[2044]: proto: precision = 0.077 usec (-24) Jan 13 21:32:49.004255 ntpd[2044]: basedate set to 2025-01-01 Jan 13 21:32:49.004276 ntpd[2044]: gps base set to 2025-01-05 (week 2348) Jan 13 21:32:49.069110 update_engine[2055]: I20250113 21:32:49.069006 2055 main.cc:92] Flatcar Update Engine starting Jan 13 21:32:49.089055 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:32:49.089134 update_engine[2055]: I20250113 21:32:49.088682 2055 update_check_scheduler.cc:74] Next update check in 6m20s Jan 13 21:32:49.085589 ntpd[2044]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen normally on 3 eth0 172.31.17.229:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen normally on 4 lo [::1]:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listen normally on 5 eth0 [fe80::439:22ff:fee7:a4d7%2]:123 Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: Listening on routing socket on fd #22 for interface updates Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:49.115705 ntpd[2044]: 13 Jan 21:32:49 ntpd[2044]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:49.116005 extend-filesystems[2066]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:32:49.116005 extend-filesystems[2066]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:32:49.116005 extend-filesystems[2066]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:32:49.093754 (ntainerd)[2085]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:32:49.157982 jq[2075]: true Jan 13 21:32:49.086390 ntpd[2044]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:32:49.158187 extend-filesystems[2036]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:32:49.104672 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:32:49.092565 ntpd[2044]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:32:49.104719 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:32:49.092630 ntpd[2044]: Listen normally on 3 eth0 172.31.17.229:123 Jan 13 21:32:49.107641 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:32:49.092673 ntpd[2044]: Listen normally on 4 lo [::1]:123 Jan 13 21:32:49.107675 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
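The extend-filesystems entries show a routine first-boot grow: the root partition was already enlarged, so resize2fs performs an on-line resize of the mounted ext4 filesystem from 553472 to 1489915 4k blocks. A sketch of the same operation (requires root and e2fsprogs; device path taken from the log):

    import subprocess

    def grow_ext4(device="/dev/nvme0n1p9"):
        # With no explicit size argument, resize2fs grows the filesystem to
        # fill the underlying partition; on a mounted ext4 volume this is an
        # on-line resize, matching "on-line resizing required" above.
        subprocess.run(["resize2fs", device], check=True)

    grow_ext4()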
Jan 13 21:32:49.092721 ntpd[2044]: Listen normally on 5 eth0 [fe80::439:22ff:fee7:a4d7%2]:123 Jan 13 21:32:49.120344 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:32:49.092764 ntpd[2044]: Listening on routing socket on fd #22 for interface updates Jan 13 21:32:49.120765 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:32:49.101336 ntpd[2044]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:49.159221 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:32:49.101376 ntpd[2044]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:32:49.159548 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:32:49.107173 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:32:49.199451 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:32:49.214128 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:32:49.249288 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:32:49.264324 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:32:49.285381 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:32:49.287802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:32:49.310079 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:32:49.327125 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:32:49.338131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:32:49.345336 systemd-logind[2053]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:32:49.353613 systemd-logind[2053]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 21:32:49.353659 systemd-logind[2053]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:32:49.381677 systemd-logind[2053]: New seat seat0. Jan 13 21:32:49.389227 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:32:49.517217 bash[2144]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:32:49.519054 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:32:49.535017 systemd[1]: Starting sshkeys.service... Jan 13 21:32:49.563532 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2146) Jan 13 21:32:49.612314 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:32:49.624262 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:32:49.668370 amazon-ssm-agent[2120]: Initializing new seelog logger Jan 13 21:32:49.674511 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete Jan 13 21:32:49.674511 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.674511 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.678055 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 processing appconfig overrides Jan 13 21:32:49.688508 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
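update-ssh-keys reports rewriting /home/core/.ssh/authorized_keys above. A hedged sketch of the kind of atomic, permission-tight write such a helper needs to make (the helper name and layout here are illustrative, not Flatcar's implementation):

    import os, tempfile

    def write_authorized_keys(keys, home="/home/core"):
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        # Write to a temp file in the same directory, then rename: readers
        # never observe a half-written authorized_keys.
        fd, tmp = tempfile.mkstemp(dir=ssh_dir)
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(tmp, 0o600)  # sshd refuses group/world-readable key files
        os.replace(tmp, os.path.join(ssh_dir, "authorized_keys"))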
Jan 13 21:32:49.688508 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.690721 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 processing appconfig overrides Jan 13 21:32:49.691298 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.691812 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO Proxy environment variables: Jan 13 21:32:49.696679 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.696679 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 processing appconfig overrides Jan 13 21:32:49.708016 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.708016 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:32:49.708016 amazon-ssm-agent[2120]: 2025/01/13 21:32:49 processing appconfig overrides Jan 13 21:32:49.784356 locksmithd[2124]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:32:49.798684 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO https_proxy: Jan 13 21:32:49.845940 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:32:49.846768 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:32:49.858196 dbus-daemon[2033]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2123 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:32:49.892054 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:32:49.899579 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO http_proxy: Jan 13 21:32:49.955299 coreos-metadata[2162]: Jan 13 21:32:49.955 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:32:49.958685 coreos-metadata[2162]: Jan 13 21:32:49.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:32:49.958685 coreos-metadata[2162]: Jan 13 21:32:49.958 INFO Fetch successful Jan 13 21:32:49.958685 coreos-metadata[2162]: Jan 13 21:32:49.958 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:32:49.960416 coreos-metadata[2162]: Jan 13 21:32:49.959 INFO Fetch successful Jan 13 21:32:49.961469 unknown[2162]: wrote ssh authorized keys file for user: core Jan 13 21:32:49.967398 polkitd[2241]: Started polkitd version 121 Jan 13 21:32:50.003617 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO no_proxy: Jan 13 21:32:50.023325 polkitd[2241]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:32:50.029870 polkitd[2241]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:32:50.035806 polkitd[2241]: Finished loading, compiling and executing 2 rules Jan 13 21:32:50.049212 update-ssh-keys[2253]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:32:50.051948 dbus-daemon[2033]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:32:50.052722 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:32:50.056087 polkitd[2241]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:32:50.060022 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:32:50.068242 systemd[1]: Finished sshkeys.service. 
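systemd-hostnamed, started here, is the service that applies the transient hostname a few entries later. The same transient/static split can be driven by hand with hostnamectl; a sketch (requires root; hostname value copied from the log):

    import subprocess

    # --transient sets the runtime hostname (what cloud metadata or DHCP
    # supplies); --static would persist to /etc/hostname instead.
    subprocess.run(
        ["hostnamectl", "set-hostname", "--transient", "ip-172-31-17-229"],
        check=True,
    )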
Jan 13 21:32:50.107655 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:32:50.151776 systemd-hostnamed[2123]: Hostname set to (transient) Jan 13 21:32:50.151906 systemd-resolved[1977]: System hostname changed to 'ip-172-31-17-229'. Jan 13 21:32:50.197388 containerd[2085]: time="2025-01-13T21:32:50.197289781Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:32:50.207498 amazon-ssm-agent[2120]: 2025-01-13 21:32:49 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:32:50.306653 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO Agent will take identity from EC2 Jan 13 21:32:50.325086 containerd[2085]: time="2025-01-13T21:32:50.325010903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.331628681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.331687314Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.331716807Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.331933010Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.331963943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332054466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332074360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332397305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332423573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332449273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:50.332507 containerd[2085]: time="2025-01-13T21:32:50.332466833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.333382 containerd[2085]: time="2025-01-13T21:32:50.333092528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:32:50.333382 containerd[2085]: time="2025-01-13T21:32:50.333343837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:32:50.333745 containerd[2085]: time="2025-01-13T21:32:50.333716060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:32:50.334618 containerd[2085]: time="2025-01-13T21:32:50.334592014Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:32:50.334812 containerd[2085]: time="2025-01-13T21:32:50.334794767Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:32:50.334948 containerd[2085]: time="2025-01-13T21:32:50.334930625Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:32:50.344969 containerd[2085]: time="2025-01-13T21:32:50.344925943Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:32:50.345192 containerd[2085]: time="2025-01-13T21:32:50.345136198Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:32:50.345291 containerd[2085]: time="2025-01-13T21:32:50.345277338Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:32:50.345381 containerd[2085]: time="2025-01-13T21:32:50.345367923Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:32:50.345759 containerd[2085]: time="2025-01-13T21:32:50.345449499Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:32:50.345759 containerd[2085]: time="2025-01-13T21:32:50.345640751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346438199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346613740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346637647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346655837Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346673250Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346693880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346711978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346731772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346751100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346768437Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346785574Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346801300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346825395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.347041 containerd[2085]: time="2025-01-13T21:32:50.346848177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.347784 sshd_keygen[2084]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346866479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346884070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346900130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346917904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346934836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346952328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346971490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.346992686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347008817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347025574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347042778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347064930Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347093954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347111821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348053 containerd[2085]: time="2025-01-13T21:32:50.347127849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347192234Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347217067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347235851Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347255457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347270886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347290525Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347379771Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:32:50.348592 containerd[2085]: time="2025-01-13T21:32:50.347405042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:32:50.348952 containerd[2085]: time="2025-01-13T21:32:50.347805994Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:32:50.348952 containerd[2085]: time="2025-01-13T21:32:50.347891398Z" level=info msg="Connect containerd service" Jan 13 21:32:50.348952 containerd[2085]: time="2025-01-13T21:32:50.347947442Z" level=info msg="using legacy CRI server" Jan 13 21:32:50.348952 containerd[2085]: time="2025-01-13T21:32:50.347958171Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:32:50.348952 containerd[2085]: time="2025-01-13T21:32:50.348082843Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.352878762Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 
21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353197121Z" level=info msg="Start subscribing containerd event" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353268791Z" level=info msg="Start recovering state" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353292380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353347208Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353358025Z" level=info msg="Start event monitor" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353384063Z" level=info msg="Start snapshots syncer" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353397140Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:32:50.359523 containerd[2085]: time="2025-01-13T21:32:50.353407283Z" level=info msg="Start streaming server" Jan 13 21:32:50.353647 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:32:50.366238 containerd[2085]: time="2025-01-13T21:32:50.366194268Z" level=info msg="containerd successfully booted in 0.171289s" Jan 13 21:32:50.406058 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:50.406073 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:32:50.420253 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:32:50.432164 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:32:50.432497 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:32:50.445138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:32:50.472850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:32:50.487842 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:32:50.490363 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:32:50.491919 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:32:50.505769 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [Registrar] Starting registrar module Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [EC2Identity] EC2 registration was successful. 
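Once containerd reports serving on its sockets, the stock ctr client can confirm liveness over the same UNIX socket named in the log. A sketch (assumes containerd's bundled ctr binary is on PATH):

    import subprocess

    out = subprocess.run(
        ["ctr", "--address", "/run/containerd/containerd.sock", "version"],
        capture_output=True, text=True, check=True,
    )
    # Prints client and server versions, e.g. v1.7.21 as logged above.
    print(out.stdout)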
Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:32:50.546873 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:32:50.605307 amazon-ssm-agent[2120]: 2025-01-13 21:32:50 INFO [CredentialRefresher] Next credential rotation will be in 30.241655163166666 minutes Jan 13 21:32:51.570397 amazon-ssm-agent[2120]: 2025-01-13 21:32:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:32:51.672312 amazon-ssm-agent[2120]: 2025-01-13 21:32:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2307) started Jan 13 21:32:51.778687 amazon-ssm-agent[2120]: 2025-01-13 21:32:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:32:51.861203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:51.864462 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:32:51.866798 systemd[1]: Startup finished in 7.640s (kernel) + 8.243s (userspace) = 15.884s. Jan 13 21:32:52.022454 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:32:53.466590 kubelet[2325]: E0113 21:32:53.466470 2325 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:32:53.470145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:32:53.470451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:32:56.537859 systemd-resolved[1977]: Clock change detected. Flushing caches. Jan 13 21:32:57.656420 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:32:57.664188 systemd[1]: Started sshd@0-172.31.17.229:22-147.75.109.163:48078.service - OpenSSH per-connection server daemon (147.75.109.163:48078). Jan 13 21:32:57.860852 sshd[2338]: Accepted publickey for core from 147.75.109.163 port 48078 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:57.865052 sshd[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:57.877399 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:32:57.882800 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:32:57.890015 systemd-logind[2053]: New session 1 of user core. Jan 13 21:32:57.912360 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:32:57.926411 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:32:57.932544 (systemd)[2343]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:32:58.182110 systemd[2343]: Queued start job for default target default.target. Jan 13 21:32:58.182595 systemd[2343]: Created slice app.slice - User Application Slice. 
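The kubelet exit above is the normal first-boot failure mode on a node that has not yet been joined to a cluster: the unit points at /var/lib/kubelet/config.yaml, which kubeadm init/join only writes later, so the process exits 1 and systemd marks the service failed. A trivial pre-flight check that mirrors the error (path copied from the log):

    import os

    cfg = "/var/lib/kubelet/config.yaml"
    if not os.path.exists(cfg):
        # Matches the "no such file or directory" failure logged above;
        # kubeadm init/join is what creates this file on a joined node.
        print(f"{cfg} missing: kubelet will keep exiting until the node is joined")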
Jan 13 21:32:58.182624 systemd[2343]: Reached target paths.target - Paths. Jan 13 21:32:58.182643 systemd[2343]: Reached target timers.target - Timers. Jan 13 21:32:58.190462 systemd[2343]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:32:58.198873 systemd[2343]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:32:58.198953 systemd[2343]: Reached target sockets.target - Sockets. Jan 13 21:32:58.198972 systemd[2343]: Reached target basic.target - Basic System. Jan 13 21:32:58.199023 systemd[2343]: Reached target default.target - Main User Target. Jan 13 21:32:58.199060 systemd[2343]: Startup finished in 234ms. Jan 13 21:32:58.199863 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:32:58.209285 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:32:58.358682 systemd[1]: Started sshd@1-172.31.17.229:22-147.75.109.163:48094.service - OpenSSH per-connection server daemon (147.75.109.163:48094). Jan 13 21:32:58.537522 sshd[2356]: Accepted publickey for core from 147.75.109.163 port 48094 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:58.540522 sshd[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:58.552767 systemd-logind[2053]: New session 2 of user core. Jan 13 21:32:58.559838 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:32:58.696089 sshd[2356]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:58.699909 systemd[1]: sshd@1-172.31.17.229:22-147.75.109.163:48094.service: Deactivated successfully. Jan 13 21:32:58.705214 systemd-logind[2053]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:32:58.706667 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:32:58.713281 systemd-logind[2053]: Removed session 2. Jan 13 21:32:58.730705 systemd[1]: Started sshd@2-172.31.17.229:22-147.75.109.163:48098.service - OpenSSH per-connection server daemon (147.75.109.163:48098). Jan 13 21:32:58.901302 sshd[2364]: Accepted publickey for core from 147.75.109.163 port 48098 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:58.904069 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:58.912814 systemd-logind[2053]: New session 3 of user core. Jan 13 21:32:58.920730 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:32:59.043122 sshd[2364]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:59.050188 systemd[1]: sshd@2-172.31.17.229:22-147.75.109.163:48098.service: Deactivated successfully. Jan 13 21:32:59.058209 systemd-logind[2053]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:32:59.058791 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:32:59.066698 systemd-logind[2053]: Removed session 3. Jan 13 21:32:59.073731 systemd[1]: Started sshd@3-172.31.17.229:22-147.75.109.163:48108.service - OpenSSH per-connection server daemon (147.75.109.163:48108). Jan 13 21:32:59.236439 sshd[2372]: Accepted publickey for core from 147.75.109.163 port 48108 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:59.238983 sshd[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:59.254665 systemd-logind[2053]: New session 4 of user core. Jan 13 21:32:59.261919 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 21:32:59.402853 sshd[2372]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:59.408512 systemd[1]: sshd@3-172.31.17.229:22-147.75.109.163:48108.service: Deactivated successfully. Jan 13 21:32:59.413871 systemd-logind[2053]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:32:59.414486 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:32:59.416019 systemd-logind[2053]: Removed session 4. Jan 13 21:32:59.438517 systemd[1]: Started sshd@4-172.31.17.229:22-147.75.109.163:48112.service - OpenSSH per-connection server daemon (147.75.109.163:48112). Jan 13 21:32:59.614999 sshd[2380]: Accepted publickey for core from 147.75.109.163 port 48112 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:32:59.617500 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:32:59.627653 systemd-logind[2053]: New session 5 of user core. Jan 13 21:32:59.639632 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:32:59.763616 sudo[2384]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:32:59.764007 sudo[2384]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:32:59.783391 sudo[2384]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:59.805956 sshd[2380]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:59.811734 systemd[1]: sshd@4-172.31.17.229:22-147.75.109.163:48112.service: Deactivated successfully. Jan 13 21:32:59.818743 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:32:59.820828 systemd-logind[2053]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:32:59.822155 systemd-logind[2053]: Removed session 5. Jan 13 21:32:59.835874 systemd[1]: Started sshd@5-172.31.17.229:22-147.75.109.163:48122.service - OpenSSH per-connection server daemon (147.75.109.163:48122). Jan 13 21:32:59.994780 sshd[2389]: Accepted publickey for core from 147.75.109.163 port 48122 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:00.008868 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:00.055492 systemd-logind[2053]: New session 6 of user core. Jan 13 21:33:00.070581 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:33:00.187507 sudo[2394]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:33:00.188173 sudo[2394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:33:00.195189 sudo[2394]: pam_unix(sudo:session): session closed for user root Jan 13 21:33:00.203660 sudo[2393]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:33:00.204052 sudo[2393]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:33:00.234788 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:33:00.242685 auditctl[2397]: No rules Jan 13 21:33:00.243698 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:33:00.244602 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:33:00.260026 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:33:00.308218 augenrules[2416]: No rules Jan 13 21:33:00.310972 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
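The audit-rules restart above is the usual two-step reload: flush whatever rules are loaded in the kernel, then recompile the fragments left under /etc/audit/rules.d (here nothing, hence "No rules" from both auditctl and augenrules). The same sequence by hand (requires root and the audit userspace tools):

    import subprocess

    subprocess.run(["auditctl", "-D"], check=True)        # delete all loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # rebuild from /etc/audit/rules.d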
Jan 13 21:33:00.316624 sudo[2393]: pam_unix(sudo:session): session closed for user root Jan 13 21:33:00.345758 sshd[2389]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:00.368718 systemd[1]: sshd@5-172.31.17.229:22-147.75.109.163:48122.service: Deactivated successfully. Jan 13 21:33:00.372496 systemd-logind[2053]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:33:00.382234 systemd[1]: Started sshd@6-172.31.17.229:22-147.75.109.163:48134.service - OpenSSH per-connection server daemon (147.75.109.163:48134). Jan 13 21:33:00.383735 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:33:00.391029 systemd-logind[2053]: Removed session 6. Jan 13 21:33:00.551759 sshd[2425]: Accepted publickey for core from 147.75.109.163 port 48134 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:00.553126 sshd[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:00.559857 systemd-logind[2053]: New session 7 of user core. Jan 13 21:33:00.566859 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:33:00.665958 sudo[2429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:33:00.668418 sudo[2429]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:33:02.381069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:33:02.407704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:33:02.495685 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-7.scope)... Jan 13 21:33:02.495725 systemd[1]: Reloading... Jan 13 21:33:02.777529 zram_generator::config[2512]: No configuration found. Jan 13 21:33:02.946788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:33:03.055485 systemd[1]: Reloading finished in 557 ms. Jan 13 21:33:03.158037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:33:03.158170 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:33:03.159213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:33:03.169508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:33:03.439550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:33:03.457979 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:33:03.566536 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:33:03.566536 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:33:03.566536 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:33:03.567028 kubelet[2580]: I0113 21:33:03.566648 2580 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:33:04.060525 kubelet[2580]: I0113 21:33:04.060490 2580 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:33:04.060525 kubelet[2580]: I0113 21:33:04.060519 2580 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:33:04.060862 kubelet[2580]: I0113 21:33:04.060841 2580 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:33:04.100873 kubelet[2580]: I0113 21:33:04.100618 2580 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.128525 2580 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.129054 2580 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.129527 2580 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.129554 2580 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.129564 2580 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:33:04.130359 kubelet[2580]: I0113 21:33:04.130024 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:33:04.131118 kubelet[2580]: I0113 21:33:04.130131 2580 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:33:04.131118 kubelet[2580]: I0113 21:33:04.130232 2580 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:33:04.131118 kubelet[2580]: I0113 21:33:04.130274 2580 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:33:04.131118 kubelet[2580]: I0113 21:33:04.130289 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:33:04.135906 kubelet[2580]: 
E0113 21:33:04.135874 2580 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:04.136102 kubelet[2580]: E0113 21:33:04.136089 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:04.136925 kubelet[2580]: I0113 21:33:04.136906 2580 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:33:04.143470 kubelet[2580]: I0113 21:33:04.143433 2580 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:33:04.143589 kubelet[2580]: W0113 21:33:04.143531 2580 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:33:04.144215 kubelet[2580]: I0113 21:33:04.144190 2580 server.go:1256] "Started kubelet" Jan 13 21:33:04.145153 kubelet[2580]: I0113 21:33:04.145126 2580 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:33:04.146235 kubelet[2580]: I0113 21:33:04.146208 2580 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:33:04.149250 kubelet[2580]: I0113 21:33:04.149227 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:33:04.152351 kubelet[2580]: I0113 21:33:04.151134 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:33:04.152351 kubelet[2580]: I0113 21:33:04.151317 2580 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:33:04.154303 kubelet[2580]: W0113 21:33:04.154280 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.17.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:33:04.154500 kubelet[2580]: E0113 21:33:04.154486 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.229" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:33:04.154695 kubelet[2580]: W0113 21:33:04.154677 2580 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:33:04.154751 kubelet[2580]: E0113 21:33:04.154702 2580 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:33:04.158995 kubelet[2580]: E0113 21:33:04.158965 2580 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.229.181a5e0587655de6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.229,UID:172.31.17.229,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.229,},FirstTimestamp:2025-01-13 21:33:04.144162278 +0000 UTC 
m=+0.680094078,LastTimestamp:2025-01-13 21:33:04.144162278 +0000 UTC m=+0.680094078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.229,}" Jan 13 21:33:04.162910 kubelet[2580]: E0113 21:33:04.162554 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.162910 kubelet[2580]: I0113 21:33:04.162593 2580 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:33:04.162910 kubelet[2580]: I0113 21:33:04.162704 2580 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:33:04.162910 kubelet[2580]: I0113 21:33:04.162764 2580 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:33:04.163193 kubelet[2580]: E0113 21:33:04.163165 2580 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:33:04.164627 kubelet[2580]: I0113 21:33:04.164604 2580 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:33:04.164727 kubelet[2580]: I0113 21:33:04.164701 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:33:04.166094 kubelet[2580]: I0113 21:33:04.166070 2580 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:33:04.207757 kubelet[2580]: E0113 21:33:04.207719 2580 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.229.181a5e0588871513 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.229,UID:172.31.17.229,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.17.229,},FirstTimestamp:2025-01-13 21:33:04.163149075 +0000 UTC m=+0.699080876,LastTimestamp:2025-01-13 21:33:04.163149075 +0000 UTC m=+0.699080876,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.229,}" Jan 13 21:33:04.219196 kubelet[2580]: I0113 21:33:04.218879 2580 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:33:04.219196 kubelet[2580]: I0113 21:33:04.218903 2580 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:33:04.219196 kubelet[2580]: I0113 21:33:04.218946 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:33:04.225614 kubelet[2580]: I0113 21:33:04.225508 2580 policy_none.go:49] "None policy: Start" Jan 13 21:33:04.227542 kubelet[2580]: I0113 21:33:04.227076 2580 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:33:04.227542 kubelet[2580]: I0113 21:33:04.227109 2580 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:33:04.240644 kubelet[2580]: I0113 21:33:04.240608 2580 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:33:04.240930 kubelet[2580]: I0113 21:33:04.240901 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:33:04.249295 
kubelet[2580]: E0113 21:33:04.249049 2580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.17.229\" not found" node="172.31.17.229" Jan 13 21:33:04.253821 kubelet[2580]: E0113 21:33:04.253403 2580 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.229\" not found" Jan 13 21:33:04.264479 kubelet[2580]: I0113 21:33:04.264196 2580 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.229" Jan 13 21:33:04.269954 kubelet[2580]: I0113 21:33:04.269791 2580 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.229" Jan 13 21:33:04.317156 kubelet[2580]: I0113 21:33:04.313069 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:33:04.317156 kubelet[2580]: I0113 21:33:04.314881 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:33:04.317156 kubelet[2580]: I0113 21:33:04.314915 2580 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:33:04.317156 kubelet[2580]: I0113 21:33:04.314942 2580 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:33:04.317156 kubelet[2580]: E0113 21:33:04.315066 2580 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 21:33:04.333247 kubelet[2580]: E0113 21:33:04.333213 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.433428 kubelet[2580]: E0113 21:33:04.433381 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.533897 kubelet[2580]: E0113 21:33:04.533852 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.634650 kubelet[2580]: E0113 21:33:04.634522 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.735068 kubelet[2580]: E0113 21:33:04.735014 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.835709 kubelet[2580]: E0113 21:33:04.835660 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.936433 kubelet[2580]: E0113 21:33:04.936380 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:04.978793 sudo[2429]: pam_unix(sudo:session): session closed for user root Jan 13 21:33:05.001647 sshd[2425]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:05.005633 systemd[1]: sshd@6-172.31.17.229:22-147.75.109.163:48134.service: Deactivated successfully. Jan 13 21:33:05.013191 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:33:05.013416 systemd-logind[2053]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:33:05.016275 systemd-logind[2053]: Removed session 7. 
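The run of node "172.31.17.229" not found errors between "Attempting to register node" and the listers catching up is ordinary registration lag: components keep polling until the Node object becomes visible. A generic sketch of that poll pattern (hypothetical helper; assumes kubectl and a working kubeconfig, neither of which this log shows):

    import subprocess, time

    def wait_for_node(name="172.31.17.229", attempts=30, delay=2.0):
        for _ in range(attempts):
            r = subprocess.run(["kubectl", "get", "node", name],
                               capture_output=True, text=True)
            if r.returncode == 0:
                return True       # registration visible to the API server
            time.sleep(delay)     # still "not found"; retry
        return False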
Jan 13 21:33:05.036991 kubelet[2580]: E0113 21:33:05.036949 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:05.065395 kubelet[2580]: I0113 21:33:05.065311 2580 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:33:05.065549 kubelet[2580]: W0113 21:33:05.065534 2580 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:33:05.065624 kubelet[2580]: W0113 21:33:05.065573 2580 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:33:05.065624 kubelet[2580]: W0113 21:33:05.065601 2580 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:33:05.137081 kubelet[2580]: E0113 21:33:05.137023 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:05.137081 kubelet[2580]: E0113 21:33:05.137033 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:05.237977 kubelet[2580]: E0113 21:33:05.237707 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:05.338986 kubelet[2580]: E0113 21:33:05.338944 2580 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.229\" not found" Jan 13 21:33:05.440068 kubelet[2580]: I0113 21:33:05.440032 2580 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:33:05.440490 containerd[2085]: time="2025-01-13T21:33:05.440441644Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:33:05.441048 kubelet[2580]: I0113 21:33:05.440673 2580 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:33:06.138164 kubelet[2580]: E0113 21:33:06.138101 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:06.138975 kubelet[2580]: I0113 21:33:06.138189 2580 apiserver.go:52] "Watching apiserver" Jan 13 21:33:06.144860 kubelet[2580]: I0113 21:33:06.144823 2580 topology_manager.go:215] "Topology Admit Handler" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" podNamespace="calico-system" podName="calico-node-tpnq9" Jan 13 21:33:06.144984 kubelet[2580]: I0113 21:33:06.144948 2580 topology_manager.go:215] "Topology Admit Handler" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" podNamespace="calico-system" podName="csi-node-driver-f6hv9" Jan 13 21:33:06.145124 kubelet[2580]: I0113 21:33:06.145002 2580 topology_manager.go:215] "Topology Admit Handler" podUID="5330098f-1be4-41c5-8f44-d692f446d378" podNamespace="kube-system" podName="kube-proxy-7qbwj" Jan 13 21:33:06.149344 kubelet[2580]: E0113 21:33:06.148000 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:06.164449 kubelet[2580]: I0113 21:33:06.164420 2580 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:33:06.174487 kubelet[2580]: I0113 21:33:06.174377 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f952e87e-8e4b-4c22-8c96-b28618d230a0-tigera-ca-bundle\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.174954 kubelet[2580]: I0113 21:33:06.174919 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f952e87e-8e4b-4c22-8c96-b28618d230a0-node-certs\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.175663 kubelet[2580]: I0113 21:33:06.175591 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-bin-dir\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.175901 kubelet[2580]: I0113 21:33:06.175792 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/458fe02f-e573-4bee-9390-8d8b1d8e6284-kubelet-dir\") pod \"csi-node-driver-f6hv9\" (UID: \"458fe02f-e573-4bee-9390-8d8b1d8e6284\") " pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:06.176142 kubelet[2580]: I0113 21:33:06.176040 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5330098f-1be4-41c5-8f44-d692f446d378-xtables-lock\") pod \"kube-proxy-7qbwj\" (UID: \"5330098f-1be4-41c5-8f44-d692f446d378\") " pod="kube-system/kube-proxy-7qbwj" Jan 
13 21:33:06.176499 kubelet[2580]: I0113 21:33:06.176425 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-xtables-lock\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.176499 kubelet[2580]: I0113 21:33:06.176482 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-run-calico\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.176809 kubelet[2580]: I0113 21:33:06.176663 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-net-dir\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.176985 kubelet[2580]: I0113 21:33:06.176897 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-flexvol-driver-host\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.177283 kubelet[2580]: I0113 21:33:06.177212 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/458fe02f-e573-4bee-9390-8d8b1d8e6284-registration-dir\") pod \"csi-node-driver-f6hv9\" (UID: \"458fe02f-e573-4bee-9390-8d8b1d8e6284\") " pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:06.177744 kubelet[2580]: I0113 21:33:06.177541 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5330098f-1be4-41c5-8f44-d692f446d378-lib-modules\") pod \"kube-proxy-7qbwj\" (UID: \"5330098f-1be4-41c5-8f44-d692f446d378\") " pod="kube-system/kube-proxy-7qbwj" Jan 13 21:33:06.177744 kubelet[2580]: I0113 21:33:06.177612 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6bt\" (UniqueName: \"kubernetes.io/projected/5330098f-1be4-41c5-8f44-d692f446d378-kube-api-access-fn6bt\") pod \"kube-proxy-7qbwj\" (UID: \"5330098f-1be4-41c5-8f44-d692f446d378\") " pod="kube-system/kube-proxy-7qbwj" Jan 13 21:33:06.177964 kubelet[2580]: I0113 21:33:06.177880 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-policysync\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.178305 kubelet[2580]: I0113 21:33:06.178263 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-lib-calico\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.178646 kubelet[2580]: I0113 21:33:06.178575 2580 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwgzj\" (UniqueName: \"kubernetes.io/projected/f952e87e-8e4b-4c22-8c96-b28618d230a0-kube-api-access-mwgzj\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.178646 kubelet[2580]: I0113 21:33:06.178622 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/458fe02f-e573-4bee-9390-8d8b1d8e6284-socket-dir\") pod \"csi-node-driver-f6hv9\" (UID: \"458fe02f-e573-4bee-9390-8d8b1d8e6284\") " pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:06.179034 kubelet[2580]: I0113 21:33:06.178775 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5330098f-1be4-41c5-8f44-d692f446d378-kube-proxy\") pod \"kube-proxy-7qbwj\" (UID: \"5330098f-1be4-41c5-8f44-d692f446d378\") " pod="kube-system/kube-proxy-7qbwj" Jan 13 21:33:06.179034 kubelet[2580]: I0113 21:33:06.178906 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-lib-modules\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.179502 kubelet[2580]: I0113 21:33:06.179489 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-log-dir\") pod \"calico-node-tpnq9\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " pod="calico-system/calico-node-tpnq9" Jan 13 21:33:06.179734 kubelet[2580]: I0113 21:33:06.179721 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/458fe02f-e573-4bee-9390-8d8b1d8e6284-varrun\") pod \"csi-node-driver-f6hv9\" (UID: \"458fe02f-e573-4bee-9390-8d8b1d8e6284\") " pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:06.179914 kubelet[2580]: I0113 21:33:06.179831 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vkp\" (UniqueName: \"kubernetes.io/projected/458fe02f-e573-4bee-9390-8d8b1d8e6284-kube-api-access-q8vkp\") pod \"csi-node-driver-f6hv9\" (UID: \"458fe02f-e573-4bee-9390-8d8b1d8e6284\") " pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.284269 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.285956 kubelet[2580]: W0113 21:33:06.284298 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.284352 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.284804 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.285956 kubelet[2580]: W0113 21:33:06.284976 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.285006 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.285250 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.285956 kubelet[2580]: W0113 21:33:06.285260 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.285288 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.285956 kubelet[2580]: E0113 21:33:06.285859 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.286621 kubelet[2580]: W0113 21:33:06.285871 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.286621 kubelet[2580]: E0113 21:33:06.285892 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.291704 kubelet[2580]: E0113 21:33:06.291678 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.291704 kubelet[2580]: W0113 21:33:06.291700 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.291946 kubelet[2580]: E0113 21:33:06.291722 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.303446 kubelet[2580]: E0113 21:33:06.300146 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.303446 kubelet[2580]: W0113 21:33:06.300166 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.303446 kubelet[2580]: E0113 21:33:06.300189 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:33:06.307353 kubelet[2580]: E0113 21:33:06.307039 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.307353 kubelet[2580]: W0113 21:33:06.307064 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.307353 kubelet[2580]: E0113 21:33:06.307089 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.310350 kubelet[2580]: E0113 21:33:06.308236 2580 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:33:06.310350 kubelet[2580]: W0113 21:33:06.308258 2580 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:33:06.310350 kubelet[2580]: E0113 21:33:06.308278 2580 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:33:06.454843 containerd[2085]: time="2025-01-13T21:33:06.454794106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qbwj,Uid:5330098f-1be4-41c5-8f44-d692f446d378,Namespace:kube-system,Attempt:0,}" Jan 13 21:33:06.459428 containerd[2085]: time="2025-01-13T21:33:06.459030862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpnq9,Uid:f952e87e-8e4b-4c22-8c96-b28618d230a0,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:07.046556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262659326.mount: Deactivated successfully. 
Jan 13 21:33:07.064387 containerd[2085]: time="2025-01-13T21:33:07.064320404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:33:07.065923 containerd[2085]: time="2025-01-13T21:33:07.065708662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:33:07.069347 containerd[2085]: time="2025-01-13T21:33:07.066644081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:33:07.069347 containerd[2085]: time="2025-01-13T21:33:07.068686040Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:33:07.069790 containerd[2085]: time="2025-01-13T21:33:07.069747271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:33:07.074971 containerd[2085]: time="2025-01-13T21:33:07.074932127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:33:07.075966 containerd[2085]: time="2025-01-13T21:33:07.075925644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.032724ms" Jan 13 21:33:07.077390 containerd[2085]: time="2025-01-13T21:33:07.077307791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.17372ms" Jan 13 21:33:07.138794 kubelet[2580]: E0113 21:33:07.138735 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:07.349501 containerd[2085]: time="2025-01-13T21:33:07.348941579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:07.349501 containerd[2085]: time="2025-01-13T21:33:07.349014624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:07.349501 containerd[2085]: time="2025-01-13T21:33:07.349031436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:07.349501 containerd[2085]: time="2025-01-13T21:33:07.349232948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:07.355179 containerd[2085]: time="2025-01-13T21:33:07.355061368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:07.358831 containerd[2085]: time="2025-01-13T21:33:07.358022504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:07.358831 containerd[2085]: time="2025-01-13T21:33:07.358179160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:07.358831 containerd[2085]: time="2025-01-13T21:33:07.358709412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:07.546818 containerd[2085]: time="2025-01-13T21:33:07.546541400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpnq9,Uid:f952e87e-8e4b-4c22-8c96-b28618d230a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\"" Jan 13 21:33:07.550350 containerd[2085]: time="2025-01-13T21:33:07.550133421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7qbwj,Uid:5330098f-1be4-41c5-8f44-d692f446d378,Namespace:kube-system,Attempt:0,} returns sandbox id \"e257c7c97c0e2a31554359cd9260f05094b32ef862da5490d033458c19d112b3\"" Jan 13 21:33:07.552276 containerd[2085]: time="2025-01-13T21:33:07.552117688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 21:33:08.140769 kubelet[2580]: E0113 21:33:08.140703 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:08.315903 kubelet[2580]: E0113 21:33:08.315473 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:09.141854 kubelet[2580]: E0113 21:33:09.141802 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:09.169721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425858721.mount: Deactivated successfully. 
Jan 13 21:33:09.332879 containerd[2085]: time="2025-01-13T21:33:09.332828490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:09.337094 containerd[2085]: time="2025-01-13T21:33:09.335849997Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:09.337094 containerd[2085]: time="2025-01-13T21:33:09.335942457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 21:33:09.338920 containerd[2085]: time="2025-01-13T21:33:09.338124147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:09.338920 containerd[2085]: time="2025-01-13T21:33:09.338733261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.78657043s" Jan 13 21:33:09.338920 containerd[2085]: time="2025-01-13T21:33:09.338771652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 21:33:09.340085 containerd[2085]: time="2025-01-13T21:33:09.340061631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:33:09.341258 containerd[2085]: time="2025-01-13T21:33:09.341223903Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:33:09.359200 containerd[2085]: time="2025-01-13T21:33:09.359161161Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\"" Jan 13 21:33:09.360002 containerd[2085]: time="2025-01-13T21:33:09.359970476Z" level=info msg="StartContainer for \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\"" Jan 13 21:33:09.432384 containerd[2085]: time="2025-01-13T21:33:09.432243574Z" level=info msg="StartContainer for \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\" returns successfully" Jan 13 21:33:09.562623 containerd[2085]: time="2025-01-13T21:33:09.561538845Z" level=info msg="shim disconnected" id=5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae namespace=k8s.io Jan 13 21:33:09.562623 containerd[2085]: time="2025-01-13T21:33:09.561595550Z" level=warning msg="cleaning up after shim disconnected" id=5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae namespace=k8s.io Jan 13 21:33:09.562623 containerd[2085]: time="2025-01-13T21:33:09.561607891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:10.089084 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae-rootfs.mount: Deactivated successfully. Jan 13 21:33:10.144345 kubelet[2580]: E0113 21:33:10.142620 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:10.317665 kubelet[2580]: E0113 21:33:10.317634 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:10.792321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507176512.mount: Deactivated successfully. Jan 13 21:33:11.144043 kubelet[2580]: E0113 21:33:11.143781 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:11.511900 containerd[2085]: time="2025-01-13T21:33:11.511740808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:11.513361 containerd[2085]: time="2025-01-13T21:33:11.513267894Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:33:11.514634 containerd[2085]: time="2025-01-13T21:33:11.514586476Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:11.517098 containerd[2085]: time="2025-01-13T21:33:11.517045194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:11.518369 containerd[2085]: time="2025-01-13T21:33:11.517836356Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.176821161s" Jan 13 21:33:11.518369 containerd[2085]: time="2025-01-13T21:33:11.517922720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:33:11.519386 containerd[2085]: time="2025-01-13T21:33:11.519308462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:33:11.527262 containerd[2085]: time="2025-01-13T21:33:11.527217413Z" level=info msg="CreateContainer within sandbox \"e257c7c97c0e2a31554359cd9260f05094b32ef862da5490d033458c19d112b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:33:11.577801 containerd[2085]: time="2025-01-13T21:33:11.577751062Z" level=info msg="CreateContainer within sandbox \"e257c7c97c0e2a31554359cd9260f05094b32ef862da5490d033458c19d112b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07dba6adfd09658ec3b55a08883929fe5560a5e84370d580f6ce0647642647f6\"" Jan 13 21:33:11.578725 containerd[2085]: time="2025-01-13T21:33:11.578671399Z" level=info msg="StartContainer for 
\"07dba6adfd09658ec3b55a08883929fe5560a5e84370d580f6ce0647642647f6\"" Jan 13 21:33:11.660561 containerd[2085]: time="2025-01-13T21:33:11.660432750Z" level=info msg="StartContainer for \"07dba6adfd09658ec3b55a08883929fe5560a5e84370d580f6ce0647642647f6\" returns successfully" Jan 13 21:33:12.144565 kubelet[2580]: E0113 21:33:12.144496 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:12.316146 kubelet[2580]: E0113 21:33:12.315723 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:12.368936 kubelet[2580]: I0113 21:33:12.368885 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7qbwj" podStartSLOduration=4.404062693 podStartE2EDuration="8.3688374s" podCreationTimestamp="2025-01-13 21:33:04 +0000 UTC" firstStartedPulling="2025-01-13 21:33:07.553514533 +0000 UTC m=+4.089446309" lastFinishedPulling="2025-01-13 21:33:11.51828923 +0000 UTC m=+8.054221016" observedRunningTime="2025-01-13 21:33:12.367642251 +0000 UTC m=+8.903574050" watchObservedRunningTime="2025-01-13 21:33:12.3688374 +0000 UTC m=+8.904769198" Jan 13 21:33:13.145726 kubelet[2580]: E0113 21:33:13.144984 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:14.146040 kubelet[2580]: E0113 21:33:14.145978 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:14.316573 kubelet[2580]: E0113 21:33:14.316535 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:15.146849 kubelet[2580]: E0113 21:33:15.146815 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:15.307056 containerd[2085]: time="2025-01-13T21:33:15.307005236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:15.308373 containerd[2085]: time="2025-01-13T21:33:15.308249622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:33:15.309464 containerd[2085]: time="2025-01-13T21:33:15.309427630Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:15.311778 containerd[2085]: time="2025-01-13T21:33:15.311729986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:15.312530 containerd[2085]: time="2025-01-13T21:33:15.312496378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.793005678s" Jan 13 21:33:15.312609 containerd[2085]: time="2025-01-13T21:33:15.312538896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:33:15.315615 containerd[2085]: time="2025-01-13T21:33:15.315585455Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:33:15.334825 containerd[2085]: time="2025-01-13T21:33:15.334684301Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\"" Jan 13 21:33:15.335774 containerd[2085]: time="2025-01-13T21:33:15.335731669Z" level=info msg="StartContainer for \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\"" Jan 13 21:33:15.408708 containerd[2085]: time="2025-01-13T21:33:15.407828231Z" level=info msg="StartContainer for \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\" returns successfully" Jan 13 21:33:16.149220 kubelet[2580]: E0113 21:33:16.147961 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:16.317064 kubelet[2580]: E0113 21:33:16.316979 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:16.380358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:16.453283 kubelet[2580]: I0113 21:33:16.452953 2580 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:33:16.625607 containerd[2085]: time="2025-01-13T21:33:16.625516672Z" level=info msg="shim disconnected" id=47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c namespace=k8s.io Jan 13 21:33:16.625607 containerd[2085]: time="2025-01-13T21:33:16.625584161Z" level=warning msg="cleaning up after shim disconnected" id=47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c namespace=k8s.io Jan 13 21:33:16.625607 containerd[2085]: time="2025-01-13T21:33:16.625599671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:17.148805 kubelet[2580]: E0113 21:33:17.148750 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:17.374891 containerd[2085]: time="2025-01-13T21:33:17.374843783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:33:18.149478 kubelet[2580]: E0113 21:33:18.149418 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:18.322361 containerd[2085]: time="2025-01-13T21:33:18.320727590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hv9,Uid:458fe02f-e573-4bee-9390-8d8b1d8e6284,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:18.488056 containerd[2085]: time="2025-01-13T21:33:18.485635626Z" level=error msg="Failed to destroy network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:18.488979 containerd[2085]: time="2025-01-13T21:33:18.488857786Z" level=error msg="encountered an error cleaning up failed sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:18.489220 containerd[2085]: time="2025-01-13T21:33:18.489078223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hv9,Uid:458fe02f-e573-4bee-9390-8d8b1d8e6284,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:18.490132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7-shm.mount: Deactivated successfully. 
Jan 13 21:33:18.491706 kubelet[2580]: E0113 21:33:18.491675 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:18.491794 kubelet[2580]: E0113 21:33:18.491753 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:18.491794 kubelet[2580]: E0113 21:33:18.491783 2580 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6hv9" Jan 13 21:33:18.491952 kubelet[2580]: E0113 21:33:18.491849 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f6hv9_calico-system(458fe02f-e573-4bee-9390-8d8b1d8e6284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f6hv9_calico-system(458fe02f-e573-4bee-9390-8d8b1d8e6284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:19.149632 kubelet[2580]: E0113 21:33:19.149571 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:19.386908 kubelet[2580]: I0113 21:33:19.386873 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:19.388105 containerd[2085]: time="2025-01-13T21:33:19.388061832Z" level=info msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" Jan 13 21:33:19.388603 containerd[2085]: time="2025-01-13T21:33:19.388269735Z" level=info msg="Ensure that sandbox ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7 in task-service has been cleanup successfully" Jan 13 21:33:19.417138 kubelet[2580]: I0113 21:33:19.415722 2580 topology_manager.go:215] "Topology Admit Handler" podUID="c5eda378-4eb7-4846-8551-05ef7a53a762" podNamespace="default" podName="nginx-deployment-6d5f899847-cnf6b" Jan 13 21:33:19.426966 containerd[2085]: time="2025-01-13T21:33:19.426864157Z" level=error msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" failed" error="failed to destroy network for sandbox 
\"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:19.427158 kubelet[2580]: E0113 21:33:19.427127 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:19.427244 kubelet[2580]: E0113 21:33:19.427208 2580 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7"} Jan 13 21:33:19.427298 kubelet[2580]: E0113 21:33:19.427258 2580 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"458fe02f-e573-4bee-9390-8d8b1d8e6284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:33:19.427392 kubelet[2580]: E0113 21:33:19.427299 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"458fe02f-e573-4bee-9390-8d8b1d8e6284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6hv9" podUID="458fe02f-e573-4bee-9390-8d8b1d8e6284" Jan 13 21:33:19.495962 kubelet[2580]: I0113 21:33:19.495923 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7nzq\" (UniqueName: \"kubernetes.io/projected/c5eda378-4eb7-4846-8551-05ef7a53a762-kube-api-access-q7nzq\") pod \"nginx-deployment-6d5f899847-cnf6b\" (UID: \"c5eda378-4eb7-4846-8551-05ef7a53a762\") " pod="default/nginx-deployment-6d5f899847-cnf6b" Jan 13 21:33:19.724001 containerd[2085]: time="2025-01-13T21:33:19.723892949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cnf6b,Uid:c5eda378-4eb7-4846-8551-05ef7a53a762,Namespace:default,Attempt:0,}" Jan 13 21:33:19.853929 containerd[2085]: time="2025-01-13T21:33:19.853826157Z" level=error msg="Failed to destroy network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:19.857346 containerd[2085]: time="2025-01-13T21:33:19.855223793Z" level=error msg="encountered an error cleaning up failed sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:19.858462 containerd[2085]: time="2025-01-13T21:33:19.858412693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cnf6b,Uid:c5eda378-4eb7-4846-8551-05ef7a53a762,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:19.858934 kubelet[2580]: E0113 21:33:19.858900 2580 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:19.859021 kubelet[2580]: E0113 21:33:19.859004 2580 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cnf6b" Jan 13 21:33:19.859069 kubelet[2580]: E0113 21:33:19.859034 2580 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-cnf6b" Jan 13 21:33:19.859111 kubelet[2580]: E0113 21:33:19.859102 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-cnf6b_default(c5eda378-4eb7-4846-8551-05ef7a53a762)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-cnf6b_default(c5eda378-4eb7-4846-8551-05ef7a53a762)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cnf6b" podUID="c5eda378-4eb7-4846-8551-05ef7a53a762" Jan 13 21:33:19.859547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827-shm.mount: Deactivated successfully. 
Jan 13 21:33:20.150446 kubelet[2580]: E0113 21:33:20.150412 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:20.390776 kubelet[2580]: I0113 21:33:20.390489 2580 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:20.391456 containerd[2085]: time="2025-01-13T21:33:20.391421132Z" level=info msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" Jan 13 21:33:20.392001 containerd[2085]: time="2025-01-13T21:33:20.391964945Z" level=info msg="Ensure that sandbox d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827 in task-service has been cleanup successfully" Jan 13 21:33:20.460886 containerd[2085]: time="2025-01-13T21:33:20.460744086Z" level=error msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" failed" error="failed to destroy network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:33:20.461273 kubelet[2580]: E0113 21:33:20.461052 2580 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:20.461273 kubelet[2580]: E0113 21:33:20.461199 2580 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827"} Jan 13 21:33:20.461273 kubelet[2580]: E0113 21:33:20.461254 2580 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5eda378-4eb7-4846-8551-05ef7a53a762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:33:20.461698 kubelet[2580]: E0113 21:33:20.461311 2580 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5eda378-4eb7-4846-8551-05ef7a53a762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-cnf6b" podUID="c5eda378-4eb7-4846-8551-05ef7a53a762" Jan 13 21:33:20.737924 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:33:21.151744 kubelet[2580]: E0113 21:33:21.151703 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:22.152165 kubelet[2580]: E0113 21:33:22.152098 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:23.153047 kubelet[2580]: E0113 21:33:23.152978 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:23.925395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852581769.mount: Deactivated successfully. Jan 13 21:33:24.001477 containerd[2085]: time="2025-01-13T21:33:24.001420892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:24.003359 containerd[2085]: time="2025-01-13T21:33:24.003183145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:33:24.004941 containerd[2085]: time="2025-01-13T21:33:24.004882576Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:24.011389 containerd[2085]: time="2025-01-13T21:33:24.009962302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:24.011389 containerd[2085]: time="2025-01-13T21:33:24.011213791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.636316975s" Jan 13 21:33:24.011389 containerd[2085]: time="2025-01-13T21:33:24.011257463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:33:24.030410 containerd[2085]: time="2025-01-13T21:33:24.030367647Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:33:24.061622 containerd[2085]: time="2025-01-13T21:33:24.061576887Z" level=info msg="CreateContainer within sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\"" Jan 13 21:33:24.062395 containerd[2085]: time="2025-01-13T21:33:24.062362154Z" level=info msg="StartContainer for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\"" Jan 13 21:33:24.132628 kubelet[2580]: E0113 21:33:24.132593 2580 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:24.153194 kubelet[2580]: E0113 21:33:24.153151 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:24.172693 containerd[2085]: time="2025-01-13T21:33:24.172645123Z" level=info msg="StartContainer for 
\"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" returns successfully" Jan 13 21:33:24.309887 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:33:24.310048 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:33:24.432255 kubelet[2580]: I0113 21:33:24.431809 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tpnq9" podStartSLOduration=3.970058754 podStartE2EDuration="20.431756295s" podCreationTimestamp="2025-01-13 21:33:04 +0000 UTC" firstStartedPulling="2025-01-13 21:33:07.550598138 +0000 UTC m=+4.086529929" lastFinishedPulling="2025-01-13 21:33:24.012295688 +0000 UTC m=+20.548227470" observedRunningTime="2025-01-13 21:33:24.431425258 +0000 UTC m=+20.967357056" watchObservedRunningTime="2025-01-13 21:33:24.431756295 +0000 UTC m=+20.967688073" Jan 13 21:33:25.154165 kubelet[2580]: E0113 21:33:25.154109 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:26.155200 kubelet[2580]: E0113 21:33:26.155150 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:26.251362 kernel: bpftool[3370]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:33:26.506095 (udev-worker)[3183]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:26.508488 systemd-networkd[1658]: vxlan.calico: Link UP Jan 13 21:33:26.508493 systemd-networkd[1658]: vxlan.calico: Gained carrier Jan 13 21:33:26.543026 (udev-worker)[3398]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:27.155960 kubelet[2580]: E0113 21:33:27.155908 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:28.156376 kubelet[2580]: E0113 21:33:28.156319 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:28.512139 systemd-networkd[1658]: vxlan.calico: Gained IPv6LL Jan 13 21:33:29.156541 kubelet[2580]: E0113 21:33:29.156483 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:30.156921 kubelet[2580]: E0113 21:33:30.156871 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:30.537518 ntpd[2044]: Listen normally on 6 vxlan.calico 192.168.6.0:123 Jan 13 21:33:30.538038 ntpd[2044]: 13 Jan 21:33:30 ntpd[2044]: Listen normally on 6 vxlan.calico 192.168.6.0:123 Jan 13 21:33:30.538038 ntpd[2044]: 13 Jan 21:33:30 ntpd[2044]: Listen normally on 7 vxlan.calico [fe80::6406:73ff:fec4:ca87%3]:123 Jan 13 21:33:30.537605 ntpd[2044]: Listen normally on 7 vxlan.calico [fe80::6406:73ff:fec4:ca87%3]:123 Jan 13 21:33:31.157666 kubelet[2580]: E0113 21:33:31.157614 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:32.158150 kubelet[2580]: E0113 21:33:32.158097 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:33.158824 kubelet[2580]: E0113 21:33:33.158767 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:33.316922 
containerd[2085]: time="2025-01-13T21:33:33.316730956Z" level=info msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.731 [INFO][3460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.733 [INFO][3460] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" iface="eth0" netns="/var/run/netns/cni-919efbfe-3177-17d2-8f55-2dc1deca040e" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.734 [INFO][3460] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" iface="eth0" netns="/var/run/netns/cni-919efbfe-3177-17d2-8f55-2dc1deca040e" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.745 [INFO][3460] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" iface="eth0" netns="/var/run/netns/cni-919efbfe-3177-17d2-8f55-2dc1deca040e" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.745 [INFO][3460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:33.745 [INFO][3460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.081 [INFO][3466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.088 [INFO][3466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.088 [INFO][3466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.104 [WARNING][3466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.104 [INFO][3466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.115 [INFO][3466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:34.121012 containerd[2085]: 2025-01-13 21:33:34.119 [INFO][3460] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:33:34.121649 containerd[2085]: time="2025-01-13T21:33:34.121170859Z" level=info msg="TearDown network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" successfully" Jan 13 21:33:34.121649 containerd[2085]: time="2025-01-13T21:33:34.121475152Z" level=info msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" returns successfully" Jan 13 21:33:34.139816 containerd[2085]: time="2025-01-13T21:33:34.131412343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cnf6b,Uid:c5eda378-4eb7-4846-8551-05ef7a53a762,Namespace:default,Attempt:1,}" Jan 13 21:33:34.132707 systemd[1]: run-netns-cni\x2d919efbfe\x2d3177\x2d17d2\x2d8f55\x2d2dc1deca040e.mount: Deactivated successfully. Jan 13 21:33:34.159564 kubelet[2580]: E0113 21:33:34.159506 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:34.319590 containerd[2085]: time="2025-01-13T21:33:34.319279256Z" level=info msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" Jan 13 21:33:34.408231 systemd-networkd[1658]: cali4743b0bbb23: Link UP Jan 13 21:33:34.410721 systemd-networkd[1658]: cali4743b0bbb23: Gained carrier Jan 13 21:33:34.412278 (udev-worker)[3511]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.215 [INFO][3472] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0 nginx-deployment-6d5f899847- default c5eda378-4eb7-4846-8551-05ef7a53a762 1079 0 2025-01-13 21:33:19 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.229 nginx-deployment-6d5f899847-cnf6b eth0 default [] [] [kns.default ksa.default.default] cali4743b0bbb23 [] []}} ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.215 [INFO][3472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.261 [INFO][3483] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" HandleID="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.306 [INFO][3483] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" HandleID="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ef0), 
Attrs:map[string]string{"namespace":"default", "node":"172.31.17.229", "pod":"nginx-deployment-6d5f899847-cnf6b", "timestamp":"2025-01-13 21:33:34.261200056 +0000 UTC"}, Hostname:"172.31.17.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.306 [INFO][3483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.306 [INFO][3483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.306 [INFO][3483] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.229' Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.312 [INFO][3483] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.331 [INFO][3483] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.344 [INFO][3483] ipam/ipam.go 489: Trying affinity for 192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.350 [INFO][3483] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.354 [INFO][3483] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.354 [INFO][3483] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.357 [INFO][3483] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12 Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.368 [INFO][3483] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.394 [INFO][3483] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.1/26] block=192.168.6.0/26 handle="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.394 [INFO][3483] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.1/26] handle="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" host="172.31.17.229" Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.394 [INFO][3483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:33:34.440865 containerd[2085]: 2025-01-13 21:33:34.394 [INFO][3483] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.1/26] IPv6=[] ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" HandleID="k8s-pod-network.a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.397 [INFO][3472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c5eda378-4eb7-4846-8551-05ef7a53a762", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"", Pod:"nginx-deployment-6d5f899847-cnf6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4743b0bbb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.397 [INFO][3472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.1/32] ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.397 [INFO][3472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4743b0bbb23 ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.412 [INFO][3472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.413 [INFO][3472] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c5eda378-4eb7-4846-8551-05ef7a53a762", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12", Pod:"nginx-deployment-6d5f899847-cnf6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4743b0bbb23", MAC:"56:b0:61:3c:fd:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:34.442055 containerd[2085]: 2025-01-13 21:33:34.427 [INFO][3472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12" Namespace="default" Pod="nginx-deployment-6d5f899847-cnf6b" WorkloadEndpoint="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:33:34.482072 containerd[2085]: time="2025-01-13T21:33:34.481733970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:34.482072 containerd[2085]: time="2025-01-13T21:33:34.481788036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:34.482072 containerd[2085]: time="2025-01-13T21:33:34.481802341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:34.482072 containerd[2085]: time="2025-01-13T21:33:34.481890015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.402 [INFO][3503] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.403 [INFO][3503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" iface="eth0" netns="/var/run/netns/cni-9c35bfbb-314c-d8cb-b5eb-114edc88dc88" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.404 [INFO][3503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" iface="eth0" netns="/var/run/netns/cni-9c35bfbb-314c-d8cb-b5eb-114edc88dc88" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.405 [INFO][3503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" iface="eth0" netns="/var/run/netns/cni-9c35bfbb-314c-d8cb-b5eb-114edc88dc88" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.405 [INFO][3503] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.405 [INFO][3503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.466 [INFO][3510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.467 [INFO][3510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.467 [INFO][3510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.490 [WARNING][3510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.490 [INFO][3510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.498 [INFO][3510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:34.504770 containerd[2085]: 2025-01-13 21:33:34.500 [INFO][3503] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:33:34.507769 containerd[2085]: time="2025-01-13T21:33:34.507485917Z" level=info msg="TearDown network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" successfully" Jan 13 21:33:34.507769 containerd[2085]: time="2025-01-13T21:33:34.507529583Z" level=info msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" returns successfully" Jan 13 21:33:34.510533 containerd[2085]: time="2025-01-13T21:33:34.510348225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hv9,Uid:458fe02f-e573-4bee-9390-8d8b1d8e6284,Namespace:calico-system,Attempt:1,}" Jan 13 21:33:34.512957 systemd[1]: run-netns-cni\x2d9c35bfbb\x2d314c\x2dd8cb\x2db5eb\x2d114edc88dc88.mount: Deactivated successfully. Jan 13 21:33:34.551346 update_engine[2055]: I20250113 21:33:34.548369 2055 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:33:34.600347 containerd[2085]: time="2025-01-13T21:33:34.599272845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-cnf6b,Uid:c5eda378-4eb7-4846-8551-05ef7a53a762,Namespace:default,Attempt:1,} returns sandbox id \"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12\"" Jan 13 21:33:34.605511 containerd[2085]: time="2025-01-13T21:33:34.604732037Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:33:34.631352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3596) Jan 13 21:33:34.880418 systemd-networkd[1658]: cali07fbce742cd: Link UP Jan 13 21:33:34.880958 (udev-worker)[3599]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:34.881276 systemd-networkd[1658]: cali07fbce742cd: Gained carrier Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.686 [INFO][3568] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.229-k8s-csi--node--driver--f6hv9-eth0 csi-node-driver- calico-system 458fe02f-e573-4bee-9390-8d8b1d8e6284 1083 0 2025-01-13 21:33:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.17.229 csi-node-driver-f6hv9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali07fbce742cd [] []}} ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.686 [INFO][3568] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.785 [INFO][3652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" HandleID="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.808 [INFO][3652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" HandleID="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319a00), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.229", "pod":"csi-node-driver-f6hv9", "timestamp":"2025-01-13 21:33:34.785759617 +0000 UTC"}, Hostname:"172.31.17.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.808 [INFO][3652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.808 [INFO][3652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.808 [INFO][3652] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.229' Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.811 [INFO][3652] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.823 [INFO][3652] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.831 [INFO][3652] ipam/ipam.go 489: Trying affinity for 192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.834 [INFO][3652] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.844 [INFO][3652] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.844 [INFO][3652] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.847 [INFO][3652] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.861 [INFO][3652] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.872 [INFO][3652] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.2/26] block=192.168.6.0/26 handle="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.872 [INFO][3652] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.2/26] handle="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" host="172.31.17.229" Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.872 [INFO][3652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:33:34.902350 containerd[2085]: 2025-01-13 21:33:34.872 [INFO][3652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.2/26] IPv6=[] ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" HandleID="k8s-pod-network.1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.875 [INFO][3568] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-csi--node--driver--f6hv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"458fe02f-e573-4bee-9390-8d8b1d8e6284", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"", Pod:"csi-node-driver-f6hv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07fbce742cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.875 [INFO][3568] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.2/32] ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.876 [INFO][3568] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07fbce742cd ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.878 [INFO][3568] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.879 [INFO][3568] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9"
WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-csi--node--driver--f6hv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"458fe02f-e573-4bee-9390-8d8b1d8e6284", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b", Pod:"csi-node-driver-f6hv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07fbce742cd", MAC:"9e:83:9b:34:eb:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:34.908487 containerd[2085]: 2025-01-13 21:33:34.894 [INFO][3568] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b" Namespace="calico-system" Pod="csi-node-driver-f6hv9" WorkloadEndpoint="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:33:34.942266 containerd[2085]: time="2025-01-13T21:33:34.942014215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:34.942266 containerd[2085]: time="2025-01-13T21:33:34.942161979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:34.943201 containerd[2085]: time="2025-01-13T21:33:34.943066292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:34.943898 containerd[2085]: time="2025-01-13T21:33:34.943816497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:34.996762 containerd[2085]: time="2025-01-13T21:33:34.996727400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6hv9,Uid:458fe02f-e573-4bee-9390-8d8b1d8e6284,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b\"" Jan 13 21:33:35.159971 kubelet[2580]: E0113 21:33:35.159899 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:35.551584 systemd-networkd[1658]: cali4743b0bbb23: Gained IPv6LL Jan 13 21:33:36.160906 kubelet[2580]: E0113 21:33:36.160454 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:36.319537 systemd-networkd[1658]: cali07fbce742cd: Gained IPv6LL Jan 13 21:33:37.161504 kubelet[2580]: E0113 21:33:37.161430 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:37.812725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174435143.mount: Deactivated successfully. Jan 13 21:33:38.166409 kubelet[2580]: E0113 21:33:38.166306 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:38.537681 ntpd[2044]: Listen normally on 8 cali4743b0bbb23 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 21:33:38.539579 ntpd[2044]: 13 Jan 21:33:38 ntpd[2044]: Listen normally on 8 cali4743b0bbb23 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 13 21:33:38.539579 ntpd[2044]: 13 Jan 21:33:38 ntpd[2044]: Listen normally on 9 cali07fbce742cd [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:33:38.537782 ntpd[2044]: Listen normally on 9 cali07fbce742cd [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:33:39.167086 kubelet[2580]: E0113 21:33:39.167052 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:39.587747 containerd[2085]: time="2025-01-13T21:33:39.587619194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:39.589350 containerd[2085]: time="2025-01-13T21:33:39.589208821Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 21:33:39.601226 containerd[2085]: time="2025-01-13T21:33:39.601133584Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:39.623072 containerd[2085]: time="2025-01-13T21:33:39.623014637Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.017500159s" Jan 13 21:33:39.623072 containerd[2085]: time="2025-01-13T21:33:39.623062739Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:33:39.696096 containerd[2085]: time="2025-01-13T21:33:39.695823437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:33:39.705910 containerd[2085]: 
time="2025-01-13T21:33:39.703136157Z" level=info msg="CreateContainer within sandbox \"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:33:39.719189 containerd[2085]: time="2025-01-13T21:33:39.717281500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:39.758129 containerd[2085]: time="2025-01-13T21:33:39.758053996Z" level=info msg="CreateContainer within sandbox \"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1bff6a401ea792b87291cc4e394f737e33821059549a205a3086e5fefc832568\"" Jan 13 21:33:39.759259 containerd[2085]: time="2025-01-13T21:33:39.759221199Z" level=info msg="StartContainer for \"1bff6a401ea792b87291cc4e394f737e33821059549a205a3086e5fefc832568\"" Jan 13 21:33:39.804644 systemd[1]: run-containerd-runc-k8s.io-1bff6a401ea792b87291cc4e394f737e33821059549a205a3086e5fefc832568-runc.KMEHCT.mount: Deactivated successfully. Jan 13 21:33:39.838682 containerd[2085]: time="2025-01-13T21:33:39.838415721Z" level=info msg="StartContainer for \"1bff6a401ea792b87291cc4e394f737e33821059549a205a3086e5fefc832568\" returns successfully" Jan 13 21:33:40.170782 kubelet[2580]: E0113 21:33:40.170229 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:41.120246 containerd[2085]: time="2025-01-13T21:33:41.120196751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:41.121571 containerd[2085]: time="2025-01-13T21:33:41.121322972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:33:41.123817 containerd[2085]: time="2025-01-13T21:33:41.122625121Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:41.126353 containerd[2085]: time="2025-01-13T21:33:41.126173518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:41.127011 containerd[2085]: time="2025-01-13T21:33:41.126974340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.431095194s" Jan 13 21:33:41.127084 containerd[2085]: time="2025-01-13T21:33:41.127011917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:33:41.128953 containerd[2085]: time="2025-01-13T21:33:41.128926866Z" level=info msg="CreateContainer within sandbox \"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:33:41.148548 containerd[2085]: time="2025-01-13T21:33:41.148501243Z" level=info msg="CreateContainer 
within sandbox \"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9110c0ca5a7f227637e11eeb0781872467291f9af9cacf592fe31452d78d2d31\"" Jan 13 21:33:41.149412 containerd[2085]: time="2025-01-13T21:33:41.149372335Z" level=info msg="StartContainer for \"9110c0ca5a7f227637e11eeb0781872467291f9af9cacf592fe31452d78d2d31\"" Jan 13 21:33:41.171410 kubelet[2580]: E0113 21:33:41.171322 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:41.269176 containerd[2085]: time="2025-01-13T21:33:41.269131946Z" level=info msg="StartContainer for \"9110c0ca5a7f227637e11eeb0781872467291f9af9cacf592fe31452d78d2d31\" returns successfully" Jan 13 21:33:41.272382 containerd[2085]: time="2025-01-13T21:33:41.272340859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:33:42.172402 kubelet[2580]: E0113 21:33:42.172315 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:42.915356 containerd[2085]: time="2025-01-13T21:33:42.915289578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:42.916928 containerd[2085]: time="2025-01-13T21:33:42.916849498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:33:42.918192 containerd[2085]: time="2025-01-13T21:33:42.918101534Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:42.928563 containerd[2085]: time="2025-01-13T21:33:42.928379543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:42.930937 containerd[2085]: time="2025-01-13T21:33:42.929496874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.65710157s" Jan 13 21:33:42.930937 containerd[2085]: time="2025-01-13T21:33:42.929545301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:33:42.936712 containerd[2085]: time="2025-01-13T21:33:42.936664565Z" level=info msg="CreateContainer within sandbox \"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:33:42.956082 containerd[2085]: time="2025-01-13T21:33:42.956035788Z" level=info msg="CreateContainer within sandbox \"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b9b67bc86a88e22de16807fa2e1c7a3aadc1dcfe8f5f58f995c08a7307a08bc0\"" Jan 13 21:33:42.956906 containerd[2085]: 
time="2025-01-13T21:33:42.956870418Z" level=info msg="StartContainer for \"b9b67bc86a88e22de16807fa2e1c7a3aadc1dcfe8f5f58f995c08a7307a08bc0\"" Jan 13 21:33:43.074674 containerd[2085]: time="2025-01-13T21:33:43.074607182Z" level=info msg="StartContainer for \"b9b67bc86a88e22de16807fa2e1c7a3aadc1dcfe8f5f58f995c08a7307a08bc0\" returns successfully" Jan 13 21:33:43.172575 kubelet[2580]: E0113 21:33:43.172437 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:43.273054 kubelet[2580]: I0113 21:33:43.272780 2580 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:33:43.275868 kubelet[2580]: I0113 21:33:43.275828 2580 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:33:43.528303 kubelet[2580]: I0113 21:33:43.528094 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-cnf6b" podStartSLOduration=19.5071265 podStartE2EDuration="24.528057437s" podCreationTimestamp="2025-01-13 21:33:19 +0000 UTC" firstStartedPulling="2025-01-13 21:33:34.602619912 +0000 UTC m=+31.138551698" lastFinishedPulling="2025-01-13 21:33:39.623550849 +0000 UTC m=+36.159482635" observedRunningTime="2025-01-13 21:33:40.486267295 +0000 UTC m=+37.022199094" watchObservedRunningTime="2025-01-13 21:33:43.528057437 +0000 UTC m=+40.063989235" Jan 13 21:33:43.528303 kubelet[2580]: I0113 21:33:43.528284 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-f6hv9" podStartSLOduration=31.595008753 podStartE2EDuration="39.528253633s" podCreationTimestamp="2025-01-13 21:33:04 +0000 UTC" firstStartedPulling="2025-01-13 21:33:34.998212193 +0000 UTC m=+31.534143976" lastFinishedPulling="2025-01-13 21:33:42.931457072 +0000 UTC m=+39.467388856" observedRunningTime="2025-01-13 21:33:43.527929397 +0000 UTC m=+40.063861196" watchObservedRunningTime="2025-01-13 21:33:43.528253633 +0000 UTC m=+40.064185432" Jan 13 21:33:44.130696 kubelet[2580]: E0113 21:33:44.130641 2580 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:44.173051 kubelet[2580]: E0113 21:33:44.172983 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:45.173662 kubelet[2580]: E0113 21:33:45.173612 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:45.956438 kubelet[2580]: I0113 21:33:45.956396 2580 topology_manager.go:215] "Topology Admit Handler" podUID="7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb" podNamespace="calico-system" podName="calico-typha-59646c7d8-tcg9h" Jan 13 21:33:46.092722 kubelet[2580]: I0113 21:33:46.092684 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb-tigera-ca-bundle\") pod \"calico-typha-59646c7d8-tcg9h\" (UID: \"7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb\") " pod="calico-system/calico-typha-59646c7d8-tcg9h" Jan 13 21:33:46.092910 kubelet[2580]: I0113 21:33:46.092743 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f48qc\" (UniqueName: \"kubernetes.io/projected/7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb-kube-api-access-f48qc\") pod \"calico-typha-59646c7d8-tcg9h\" (UID: \"7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb\") " pod="calico-system/calico-typha-59646c7d8-tcg9h" Jan 13 21:33:46.092910 kubelet[2580]: I0113 21:33:46.092780 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb-typha-certs\") pod \"calico-typha-59646c7d8-tcg9h\" (UID: \"7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb\") " pod="calico-system/calico-typha-59646c7d8-tcg9h" Jan 13 21:33:46.175073 kubelet[2580]: E0113 21:33:46.174117 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:46.561463 containerd[2085]: time="2025-01-13T21:33:46.561321931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59646c7d8-tcg9h,Uid:7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:46.625988 containerd[2085]: time="2025-01-13T21:33:46.625449009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:46.625988 containerd[2085]: time="2025-01-13T21:33:46.625526505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:46.625988 containerd[2085]: time="2025-01-13T21:33:46.625544178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:46.625988 containerd[2085]: time="2025-01-13T21:33:46.625680671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:46.840167 containerd[2085]: time="2025-01-13T21:33:46.839652232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59646c7d8-tcg9h,Uid:7b8f7b5b-353f-4bf7-bfbb-e587c63b3fbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"6683657d433bcd9d0e9ba1fd1b9ffa86c76828d8f2e024f2f6f60951a96c28c3\"" Jan 13 21:33:46.849296 containerd[2085]: time="2025-01-13T21:33:46.847653890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 21:33:46.973523 containerd[2085]: time="2025-01-13T21:33:46.973473804Z" level=info msg="StopContainer for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" with timeout 5 (s)" Jan 13 21:33:46.975249 containerd[2085]: time="2025-01-13T21:33:46.975079345Z" level=info msg="Stop container \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" with signal terminated" Jan 13 21:33:47.128859 containerd[2085]: time="2025-01-13T21:33:47.070159255Z" level=info msg="shim disconnected" id=043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e namespace=k8s.io Jan 13 21:33:47.128859 containerd[2085]: time="2025-01-13T21:33:47.128561934Z" level=warning msg="cleaning up after shim disconnected" id=043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e namespace=k8s.io Jan 13 21:33:47.128859 containerd[2085]: time="2025-01-13T21:33:47.128637789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:47.175038 kubelet[2580]: E0113 21:33:47.174914 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:47.210705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e-rootfs.mount: Deactivated successfully. Jan 13 21:33:47.322949 containerd[2085]: time="2025-01-13T21:33:47.322902280Z" level=info msg="StopContainer for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" returns successfully" Jan 13 21:33:47.323650 containerd[2085]: time="2025-01-13T21:33:47.323538658Z" level=info msg="StopPodSandbox for \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\"" Jan 13 21:33:47.329008 containerd[2085]: time="2025-01-13T21:33:47.328949249Z" level=info msg="Container to stop \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:47.329008 containerd[2085]: time="2025-01-13T21:33:47.329001463Z" level=info msg="Container to stop \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:47.329008 containerd[2085]: time="2025-01-13T21:33:47.329016551Z" level=info msg="Container to stop \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:47.332451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a-shm.mount: Deactivated successfully. 
Jan 13 21:33:47.417980 containerd[2085]: time="2025-01-13T21:33:47.417805365Z" level=info msg="shim disconnected" id=85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a namespace=k8s.io Jan 13 21:33:47.419642 containerd[2085]: time="2025-01-13T21:33:47.417959553Z" level=warning msg="cleaning up after shim disconnected" id=85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a namespace=k8s.io Jan 13 21:33:47.419642 containerd[2085]: time="2025-01-13T21:33:47.418384502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:47.419848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a-rootfs.mount: Deactivated successfully. Jan 13 21:33:47.463765 containerd[2085]: time="2025-01-13T21:33:47.462681264Z" level=info msg="TearDown network for sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" successfully" Jan 13 21:33:47.463765 containerd[2085]: time="2025-01-13T21:33:47.462725007Z" level=info msg="StopPodSandbox for \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" returns successfully" Jan 13 21:33:47.517048 kubelet[2580]: I0113 21:33:47.517013 2580 scope.go:117] "RemoveContainer" containerID="043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e" Jan 13 21:33:47.520285 containerd[2085]: time="2025-01-13T21:33:47.520248244Z" level=info msg="RemoveContainer for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\"" Jan 13 21:33:47.525277 containerd[2085]: time="2025-01-13T21:33:47.525136048Z" level=info msg="RemoveContainer for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" returns successfully" Jan 13 21:33:47.525929 kubelet[2580]: I0113 21:33:47.525901 2580 scope.go:117] "RemoveContainer" containerID="47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c" Jan 13 21:33:47.529751 containerd[2085]: time="2025-01-13T21:33:47.529697782Z" level=info msg="RemoveContainer for \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\"" Jan 13 21:33:47.534036 containerd[2085]: time="2025-01-13T21:33:47.533998455Z" level=info msg="RemoveContainer for \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\" returns successfully" Jan 13 21:33:47.536462 kubelet[2580]: I0113 21:33:47.535854 2580 scope.go:117] "RemoveContainer" containerID="5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae" Jan 13 21:33:47.537397 containerd[2085]: time="2025-01-13T21:33:47.537365225Z" level=info msg="RemoveContainer for \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\"" Jan 13 21:33:47.542404 containerd[2085]: time="2025-01-13T21:33:47.542360090Z" level=info msg="RemoveContainer for \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\" returns successfully" Jan 13 21:33:47.543082 kubelet[2580]: I0113 21:33:47.542830 2580 scope.go:117] "RemoveContainer" containerID="043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e" Jan 13 21:33:47.551714 containerd[2085]: time="2025-01-13T21:33:47.551585382Z" level=error msg="ContainerStatus for \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\": not found" Jan 13 21:33:47.551961 kubelet[2580]: E0113 21:33:47.551932 2580 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\": not found" containerID="043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e" Jan 13 21:33:47.552068 kubelet[2580]: I0113 21:33:47.551997 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e"} err="failed to get container status \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"043aef77c3666616d0a55299b7ede379a7d378b654c8b02457486bbbed961a0e\": not found" Jan 13 21:33:47.552068 kubelet[2580]: I0113 21:33:47.552016 2580 scope.go:117] "RemoveContainer" containerID="47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c" Jan 13 21:33:47.552943 containerd[2085]: time="2025-01-13T21:33:47.552795892Z" level=error msg="ContainerStatus for \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\": not found" Jan 13 21:33:47.553150 kubelet[2580]: E0113 21:33:47.553131 2580 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\": not found" containerID="47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c" Jan 13 21:33:47.553346 kubelet[2580]: I0113 21:33:47.553173 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c"} err="failed to get container status \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"47f783d09116d6ffb10bbad474145081ed3e185ec0f0fae29aeb8e8c229eee1c\": not found" Jan 13 21:33:47.553346 kubelet[2580]: I0113 21:33:47.553189 2580 scope.go:117] "RemoveContainer" containerID="5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae" Jan 13 21:33:47.554035 containerd[2085]: time="2025-01-13T21:33:47.553987513Z" level=error msg="ContainerStatus for \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\": not found" Jan 13 21:33:47.554300 kubelet[2580]: E0113 21:33:47.554277 2580 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\": not found" containerID="5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae" Jan 13 21:33:47.554376 kubelet[2580]: I0113 21:33:47.554315 2580 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae"} err="failed to get container status \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d391f846b2d5365504cd8fbabaaafc759716b7968eb714ceec3092d4e1301ae\": not found" Jan 13 
21:33:47.582180 kubelet[2580]: I0113 21:33:47.580695 2580 topology_manager.go:215] "Topology Admit Handler" podUID="cfb3ba81-45f3-40f4-8d93-8558e04afafb" podNamespace="calico-system" podName="calico-node-pkjch" Jan 13 21:33:47.582180 kubelet[2580]: E0113 21:33:47.580759 2580 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" containerName="flexvol-driver" Jan 13 21:33:47.582180 kubelet[2580]: E0113 21:33:47.580775 2580 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" containerName="install-cni" Jan 13 21:33:47.582180 kubelet[2580]: E0113 21:33:47.580787 2580 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" containerName="calico-node" Jan 13 21:33:47.582180 kubelet[2580]: I0113 21:33:47.580817 2580 memory_manager.go:354] "RemoveStaleState removing state" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" containerName="calico-node" Jan 13 21:33:47.616074 kubelet[2580]: I0113 21:33:47.616036 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-policysync\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.616269 kubelet[2580]: I0113 21:33:47.616253 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-xtables-lock\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.616405 kubelet[2580]: I0113 21:33:47.616386 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-run-calico\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.616942 kubelet[2580]: I0113 21:33:47.616422 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-bin-dir\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.616942 kubelet[2580]: I0113 21:33:47.616255 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-policysync" (OuterVolumeSpecName: "policysync") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.616942 kubelet[2580]: I0113 21:33:47.616448 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-log-dir\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.616942 kubelet[2580]: I0113 21:33:47.616283 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.616942 kubelet[2580]: I0113 21:33:47.616495 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616524 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f952e87e-8e4b-4c22-8c96-b28618d230a0-node-certs\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616525 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616551 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-net-dir\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616580 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f952e87e-8e4b-4c22-8c96-b28618d230a0-tigera-ca-bundle\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616611 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-flexvol-driver-host\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617416 kubelet[2580]: I0113 21:33:47.616641 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwgzj\" (UniqueName: \"kubernetes.io/projected/f952e87e-8e4b-4c22-8c96-b28618d230a0-kube-api-access-mwgzj\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616726 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-lib-modules\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616755 2580 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-lib-calico\") pod \"f952e87e-8e4b-4c22-8c96-b28618d230a0\" (UID: \"f952e87e-8e4b-4c22-8c96-b28618d230a0\") " Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616811 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-lib-modules\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616841 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-cni-bin-dir\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616870 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-cni-log-dir\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.617762 kubelet[2580]: I0113 21:33:47.616900 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfb3ba81-45f3-40f4-8d93-8558e04afafb-tigera-ca-bundle\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.618296 kubelet[2580]: I0113 21:33:47.616931 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkqs\" (UniqueName: \"kubernetes.io/projected/cfb3ba81-45f3-40f4-8d93-8558e04afafb-kube-api-access-jgkqs\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.618296 kubelet[2580]: I0113 21:33:47.616988 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-xtables-lock\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.618296 kubelet[2580]: I0113 21:33:47.617026 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-var-lib-calico\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.618296 kubelet[2580]: I0113 21:33:47.617057 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-cni-net-dir\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.618296 kubelet[2580]: I0113 21:33:47.617093 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cfb3ba81-45f3-40f4-8d93-8558e04afafb-node-certs\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617125 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-var-run-calico\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617155 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-policysync\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617631 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfb3ba81-45f3-40f4-8d93-8558e04afafb-flexvol-driver-host\") pod \"calico-node-pkjch\" (UID: \"cfb3ba81-45f3-40f4-8d93-8558e04afafb\") " pod="calico-system/calico-node-pkjch" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617670 2580 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-bin-dir\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617688 2580 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-policysync\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.619595 kubelet[2580]: I0113 21:33:47.617706 2580 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-xtables-lock\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.620516 kubelet[2580]: I0113 21:33:47.617783 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.620516 kubelet[2580]: I0113 21:33:47.618710 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.624947 kubelet[2580]: I0113 21:33:47.624898 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.634099 systemd[1]: var-lib-kubelet-pods-f952e87e\x2d8e4b\x2d4c22\x2d8c96\x2db28618d230a0-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 13 21:33:47.635740 kubelet[2580]: I0113 21:33:47.634836 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f952e87e-8e4b-4c22-8c96-b28618d230a0-node-certs" (OuterVolumeSpecName: "node-certs") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:33:47.635740 kubelet[2580]: I0113 21:33:47.634943 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.636522 kubelet[2580]: I0113 21:33:47.636242 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:47.642385 systemd[1]: var-lib-kubelet-pods-f952e87e\x2d8e4b\x2d4c22\x2d8c96\x2db28618d230a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwgzj.mount: Deactivated successfully. Jan 13 21:33:47.645029 kubelet[2580]: I0113 21:33:47.643708 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f952e87e-8e4b-4c22-8c96-b28618d230a0-kube-api-access-mwgzj" (OuterVolumeSpecName: "kube-api-access-mwgzj") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "kube-api-access-mwgzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:33:47.645411 kubelet[2580]: I0113 21:33:47.645378 2580 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f952e87e-8e4b-4c22-8c96-b28618d230a0-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f952e87e-8e4b-4c22-8c96-b28618d230a0" (UID: "f952e87e-8e4b-4c22-8c96-b28618d230a0"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718887 2580 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-run-calico\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718925 2580 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-log-dir\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718941 2580 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f952e87e-8e4b-4c22-8c96-b28618d230a0-node-certs\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718956 2580 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-cni-net-dir\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718971 2580 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f952e87e-8e4b-4c22-8c96-b28618d230a0-tigera-ca-bundle\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.718987 2580 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-flexvol-driver-host\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.719001 2580 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mwgzj\" (UniqueName: \"kubernetes.io/projected/f952e87e-8e4b-4c22-8c96-b28618d230a0-kube-api-access-mwgzj\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.720036 kubelet[2580]: I0113 21:33:47.719015 2580 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-lib-modules\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.722379 kubelet[2580]: I0113 21:33:47.719030 2580 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f952e87e-8e4b-4c22-8c96-b28618d230a0-var-lib-calico\") on node \"172.31.17.229\" DevicePath \"\"" Jan 13 21:33:47.900524 containerd[2085]: time="2025-01-13T21:33:47.900015476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pkjch,Uid:cfb3ba81-45f3-40f4-8d93-8558e04afafb,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:47.955122 containerd[2085]: time="2025-01-13T21:33:47.954793723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:47.955348 containerd[2085]: time="2025-01-13T21:33:47.955202106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:47.958425 containerd[2085]: time="2025-01-13T21:33:47.958341427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:47.958946 containerd[2085]: time="2025-01-13T21:33:47.958524595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:48.010499 containerd[2085]: time="2025-01-13T21:33:48.010182190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pkjch,Uid:cfb3ba81-45f3-40f4-8d93-8558e04afafb,Namespace:calico-system,Attempt:0,} returns sandbox id \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\"" Jan 13 21:33:48.014004 containerd[2085]: time="2025-01-13T21:33:48.013964780Z" level=info msg="CreateContainer within sandbox \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:33:48.039040 containerd[2085]: time="2025-01-13T21:33:48.038910812Z" level=info msg="CreateContainer within sandbox \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be\"" Jan 13 21:33:48.039738 containerd[2085]: time="2025-01-13T21:33:48.039584076Z" level=info msg="StartContainer for \"2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be\"" Jan 13 21:33:48.173412 containerd[2085]: time="2025-01-13T21:33:48.172997198Z" level=info msg="StartContainer for \"2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be\" returns successfully" Jan 13 21:33:48.176004 kubelet[2580]: E0113 21:33:48.175972 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:48.234316 systemd[1]: var-lib-kubelet-pods-f952e87e\x2d8e4b\x2d4c22\x2d8c96\x2db28618d230a0-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 13 21:33:48.320937 kubelet[2580]: I0113 21:33:48.320803 2580 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f952e87e-8e4b-4c22-8c96-b28618d230a0" path="/var/lib/kubelet/pods/f952e87e-8e4b-4c22-8c96-b28618d230a0/volumes" Jan 13 21:33:48.548908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be-rootfs.mount: Deactivated successfully. 
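
The flexvol-driver container started at 21:33:48 is the first of calico-node-pkjch's init containers; as the log goes on to show, install-cni follows at 21:33:49 and the long-running calico-node container at 21:33:52, each started only after the previous one exits (the shim-disconnected/rootfs-unmount pairs mark those exits). A loose, hypothetical sketch of that run-one-at-a-time sequencing, using plain os/exec with placeholder commands in place of the real CRI calls:

// A simplified sketch of the sequencing visible in this log, where a pod's
// init containers run strictly in order and the next starts only after the
// previous exits successfully. The commands are placeholders, not the real
// Calico images.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered like the pod spec: each entry must exit 0 before the next runs.
	initContainers := [][]string{
		{"echo", "flexvol-driver: installing flex volume driver"},
		{"echo", "install-cni: writing CNI config and binaries"},
	}
	for _, argv := range initContainers {
		cmd := exec.Command(argv[0], argv[1:]...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			// A failed init container blocks the pod; kubelet would retry it.
			fmt.Printf("init container %q failed: %v\n", argv[0], err)
			return
		}
		fmt.Print(string(out))
	}
	fmt.Println("init containers done; main container (calico-node) can start")
}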
Jan 13 21:33:48.570490 containerd[2085]: time="2025-01-13T21:33:48.570429052Z" level=info msg="shim disconnected" id=2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be namespace=k8s.io Jan 13 21:33:48.570490 containerd[2085]: time="2025-01-13T21:33:48.570481868Z" level=warning msg="cleaning up after shim disconnected" id=2b9f04a03565bf491568c544372a6dc5925b7b09edb3f2c9569f3c8b9251d8be namespace=k8s.io Jan 13 21:33:48.570490 containerd[2085]: time="2025-01-13T21:33:48.570493135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:49.177093 kubelet[2580]: E0113 21:33:49.177052 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:49.266255 containerd[2085]: time="2025-01-13T21:33:49.266199601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:49.269450 containerd[2085]: time="2025-01-13T21:33:49.268135121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 21:33:49.271183 containerd[2085]: time="2025-01-13T21:33:49.271146916Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:49.274858 containerd[2085]: time="2025-01-13T21:33:49.274807483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:49.275820 containerd[2085]: time="2025-01-13T21:33:49.275781626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.428080217s" Jan 13 21:33:49.275916 containerd[2085]: time="2025-01-13T21:33:49.275828700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 21:33:49.293847 containerd[2085]: time="2025-01-13T21:33:49.293806835Z" level=info msg="CreateContainer within sandbox \"6683657d433bcd9d0e9ba1fd1b9ffa86c76828d8f2e024f2f6f60951a96c28c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 21:33:49.312827 containerd[2085]: time="2025-01-13T21:33:49.312782984Z" level=info msg="CreateContainer within sandbox \"6683657d433bcd9d0e9ba1fd1b9ffa86c76828d8f2e024f2f6f60951a96c28c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c42298b50c59acd80f16445b90401d9fc6c49451e1b510fd5bece01d50f91bd\"" Jan 13 21:33:49.313702 containerd[2085]: time="2025-01-13T21:33:49.313567281Z" level=info msg="StartContainer for \"7c42298b50c59acd80f16445b90401d9fc6c49451e1b510fd5bece01d50f91bd\"" Jan 13 21:33:49.440887 containerd[2085]: time="2025-01-13T21:33:49.440446967Z" level=info msg="StartContainer for \"7c42298b50c59acd80f16445b90401d9fc6c49451e1b510fd5bece01d50f91bd\" returns successfully" Jan 13 21:33:49.557371 containerd[2085]: time="2025-01-13T21:33:49.557309015Z" level=info msg="CreateContainer within sandbox 
\"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:33:49.576555 containerd[2085]: time="2025-01-13T21:33:49.576500254Z" level=info msg="CreateContainer within sandbox \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e\"" Jan 13 21:33:49.577991 containerd[2085]: time="2025-01-13T21:33:49.577272658Z" level=info msg="StartContainer for \"4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e\"" Jan 13 21:33:49.611213 kubelet[2580]: I0113 21:33:49.611175 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59646c7d8-tcg9h" podStartSLOduration=2.180986656 podStartE2EDuration="4.611130645s" podCreationTimestamp="2025-01-13 21:33:45 +0000 UTC" firstStartedPulling="2025-01-13 21:33:46.845988529 +0000 UTC m=+43.381920306" lastFinishedPulling="2025-01-13 21:33:49.276132514 +0000 UTC m=+45.812064295" observedRunningTime="2025-01-13 21:33:49.56925403 +0000 UTC m=+46.105185824" watchObservedRunningTime="2025-01-13 21:33:49.611130645 +0000 UTC m=+46.147062442" Jan 13 21:33:49.643871 containerd[2085]: time="2025-01-13T21:33:49.643827862Z" level=info msg="StartContainer for \"4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e\" returns successfully" Jan 13 21:33:50.178087 kubelet[2580]: E0113 21:33:50.177833 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:50.288364 systemd[1]: run-containerd-runc-k8s.io-7c42298b50c59acd80f16445b90401d9fc6c49451e1b510fd5bece01d50f91bd-runc.nRfrbw.mount: Deactivated successfully. Jan 13 21:33:51.178479 kubelet[2580]: E0113 21:33:51.178398 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:51.695788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:51.898445 containerd[2085]: time="2025-01-13T21:33:51.898348869Z" level=info msg="shim disconnected" id=4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e namespace=k8s.io Jan 13 21:33:51.898445 containerd[2085]: time="2025-01-13T21:33:51.898419541Z" level=warning msg="cleaning up after shim disconnected" id=4a1001677c5a00be6f62741cf9222c89d8ebc27bd54e6d0cf4856db1208e4c0e namespace=k8s.io Jan 13 21:33:51.898445 containerd[2085]: time="2025-01-13T21:33:51.898434269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:52.179617 kubelet[2580]: E0113 21:33:52.179569 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:52.647972 containerd[2085]: time="2025-01-13T21:33:52.645980285Z" level=info msg="CreateContainer within sandbox \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:33:52.670868 containerd[2085]: time="2025-01-13T21:33:52.670806843Z" level=info msg="CreateContainer within sandbox \"37ade67d46461762b9b66b82b485d76e4c7783c658d5d36e158cf842612f9e6b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"388c04078b9dbd65afb079b18ce0aff28214c1bd0bf9cd59acfe658ccb5142ad\"" Jan 13 21:33:52.672957 containerd[2085]: time="2025-01-13T21:33:52.672825285Z" level=info msg="StartContainer for \"388c04078b9dbd65afb079b18ce0aff28214c1bd0bf9cd59acfe658ccb5142ad\"" Jan 13 21:33:52.735601 systemd[1]: run-containerd-runc-k8s.io-388c04078b9dbd65afb079b18ce0aff28214c1bd0bf9cd59acfe658ccb5142ad-runc.qwXgBA.mount: Deactivated successfully. Jan 13 21:33:52.792794 containerd[2085]: time="2025-01-13T21:33:52.792746897Z" level=info msg="StartContainer for \"388c04078b9dbd65afb079b18ce0aff28214c1bd0bf9cd59acfe658ccb5142ad\" returns successfully" Jan 13 21:33:53.021163 kubelet[2580]: I0113 21:33:53.021040 2580 topology_manager.go:215] "Topology Admit Handler" podUID="a84442d7-4a0c-4f46-ae71-4dc99b3935cb" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:33:53.031353 kubelet[2580]: I0113 21:33:53.030756 2580 topology_manager.go:215] "Topology Admit Handler" podUID="10f93562-cdc2-490d-a91b-1b4c12917da6" podNamespace="calico-system" podName="calico-kube-controllers-54bdff566f-qt9p2" Jan 13 21:33:53.069837 kubelet[2580]: I0113 21:33:53.069779 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv2fr\" (UniqueName: \"kubernetes.io/projected/a84442d7-4a0c-4f46-ae71-4dc99b3935cb-kube-api-access-fv2fr\") pod \"nfs-server-provisioner-0\" (UID: \"a84442d7-4a0c-4f46-ae71-4dc99b3935cb\") " pod="default/nfs-server-provisioner-0" Jan 13 21:33:53.070400 kubelet[2580]: I0113 21:33:53.069866 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f93562-cdc2-490d-a91b-1b4c12917da6-tigera-ca-bundle\") pod \"calico-kube-controllers-54bdff566f-qt9p2\" (UID: \"10f93562-cdc2-490d-a91b-1b4c12917da6\") " pod="calico-system/calico-kube-controllers-54bdff566f-qt9p2" Jan 13 21:33:53.070400 kubelet[2580]: I0113 21:33:53.069932 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn9cl\" (UniqueName: \"kubernetes.io/projected/10f93562-cdc2-490d-a91b-1b4c12917da6-kube-api-access-zn9cl\") pod \"calico-kube-controllers-54bdff566f-qt9p2\" (UID: 
\"10f93562-cdc2-490d-a91b-1b4c12917da6\") " pod="calico-system/calico-kube-controllers-54bdff566f-qt9p2" Jan 13 21:33:53.070400 kubelet[2580]: I0113 21:33:53.069964 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a84442d7-4a0c-4f46-ae71-4dc99b3935cb-data\") pod \"nfs-server-provisioner-0\" (UID: \"a84442d7-4a0c-4f46-ae71-4dc99b3935cb\") " pod="default/nfs-server-provisioner-0" Jan 13 21:33:53.183662 kubelet[2580]: E0113 21:33:53.183623 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:53.332923 containerd[2085]: time="2025-01-13T21:33:53.332792154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a84442d7-4a0c-4f46-ae71-4dc99b3935cb,Namespace:default,Attempt:0,}" Jan 13 21:33:53.341402 containerd[2085]: time="2025-01-13T21:33:53.340898727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bdff566f-qt9p2,Uid:10f93562-cdc2-490d-a91b-1b4c12917da6,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:53.610217 (udev-worker)[4394]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:53.613014 systemd-networkd[1658]: cali1f89e62e411: Link UP Jan 13 21:33:53.615079 systemd-networkd[1658]: cali1f89e62e411: Gained carrier Jan 13 21:33:53.654883 kubelet[2580]: I0113 21:33:53.654545 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pkjch" podStartSLOduration=6.654476889 podStartE2EDuration="6.654476889s" podCreationTimestamp="2025-01-13 21:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:53.648913637 +0000 UTC m=+50.184845434" watchObservedRunningTime="2025-01-13 21:33:53.654476889 +0000 UTC m=+50.190408689" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.442 [INFO][4366] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0 calico-kube-controllers-54bdff566f- calico-system 10f93562-cdc2-490d-a91b-1b4c12917da6 1327 0 2025-01-13 21:33:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54bdff566f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172.31.17.229 calico-kube-controllers-54bdff566f-qt9p2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f89e62e411 [] []}} ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.442 [INFO][4366] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.506 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" HandleID="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Workload="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.531 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" HandleID="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Workload="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334480), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.17.229", "pod":"calico-kube-controllers-54bdff566f-qt9p2", "timestamp":"2025-01-13 21:33:53.506606094 +0000 UTC"}, Hostname:"172.31.17.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.531 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.531 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.531 [INFO][4379] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.229' Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.539 [INFO][4379] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.554 [INFO][4379] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.567 [INFO][4379] ipam/ipam.go 489: Trying affinity for 192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.570 [INFO][4379] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.574 [INFO][4379] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.574 [INFO][4379] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.577 [INFO][4379] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07 Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.586 [INFO][4379] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4379] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.3/26] block=192.168.6.0/26 handle="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4379] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.6.3/26] handle="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" host="172.31.17.229" Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:53.657938 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.3/26] IPv6=[] ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" HandleID="k8s-pod-network.ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Workload="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.600 [INFO][4366] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0", GenerateName:"calico-kube-controllers-54bdff566f-", Namespace:"calico-system", SelfLink:"", UID:"10f93562-cdc2-490d-a91b-1b4c12917da6", ResourceVersion:"1327", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bdff566f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"", Pod:"calico-kube-controllers-54bdff566f-qt9p2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f89e62e411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.600 [INFO][4366] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.3/32] ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.600 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f89e62e411 ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.616 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.627 [INFO][4366] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0", GenerateName:"calico-kube-controllers-54bdff566f-", Namespace:"calico-system", SelfLink:"", UID:"10f93562-cdc2-490d-a91b-1b4c12917da6", ResourceVersion:"1327", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bdff566f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07", Pod:"calico-kube-controllers-54bdff566f-qt9p2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f89e62e411", MAC:"d6:16:55:67:ed:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:53.660178 containerd[2085]: 2025-01-13 21:33:53.655 [INFO][4366] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07" Namespace="calico-system" Pod="calico-kube-controllers-54bdff566f-qt9p2" WorkloadEndpoint="172.31.17.229-k8s-calico--kube--controllers--54bdff566f--qt9p2-eth0" Jan 13 21:33:53.715796 systemd-networkd[1658]: cali60e51b789ff: Link UP Jan 13 21:33:53.719653 systemd-networkd[1658]: cali60e51b789ff: Gained carrier Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.451 [INFO][4357] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.229-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a84442d7-4a0c-4f46-ae71-4dc99b3935cb 1326 0 2025-01-13 21:33:49 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.17.229 nfs-server-provisioner-0 eth0 nfs-server-provisioner 
[] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.451 [INFO][4357] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.513 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" HandleID="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Workload="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.538 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" HandleID="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Workload="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d0ce0), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.229", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 21:33:53.513427592 +0000 UTC"}, Hostname:"172.31.17.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.538 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.597 [INFO][4384] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.229' Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.601 [INFO][4384] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.612 [INFO][4384] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.637 [INFO][4384] ipam/ipam.go 489: Trying affinity for 192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.643 [INFO][4384] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.652 [INFO][4384] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.652 [INFO][4384] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.655 [INFO][4384] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740 Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.664 [INFO][4384] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.680 [INFO][4384] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.4/26] block=192.168.6.0/26 handle="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.680 [INFO][4384] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.4/26] handle="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" host="172.31.17.229" Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.680 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
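
Both sandboxes draw from the same affine block: host 172.31.17.229 confirms its affinity for 192.168.6.0/26 under the host-wide IPAM lock, then hands out the next free addresses in order — 192.168.6.3 to calico-kube-controllers-54bdff566f-qt9p2 and 192.168.6.4 to nfs-server-provisioner-0. A toy sketch of next-free-address assignment within an affine block (the .1 and .2 allocations are hypothetical placeholders; Calico's real IPAM also persists handles and takes the lock seen in the log, both omitted here):

// A toy sketch of block-affinity IP assignment as visible above: walk the
// host's affine /26 in order and hand out the first unallocated address.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	// Skip the network address itself and walk the block in order.
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.6.0/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.6.1"): true, // hypothetical earlier pods
		netip.MustParseAddr("192.168.6.2"): true,
		netip.MustParseAddr("192.168.6.3"): true, // calico-kube-controllers, per the log
	}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("assigned", ip) // assigned 192.168.6.4
	}
}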
Jan 13 21:33:53.748627 containerd[2085]: 2025-01-13 21:33:53.680 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.4/26] IPv6=[] ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" HandleID="k8s-pod-network.6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Workload="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.749716 containerd[2085]: 2025-01-13 21:33:53.687 [INFO][4357] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a84442d7-4a0c-4f46-ae71-4dc99b3935cb", ResourceVersion:"1326", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:53.749716 containerd[2085]: 2025-01-13 21:33:53.688 [INFO][4357] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.4/32] ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.749716 containerd[2085]: 2025-01-13 21:33:53.688 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.749716 containerd[2085]: 2025-01-13 21:33:53.718 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.750007 containerd[2085]: 2025-01-13 21:33:53.721 [INFO][4357] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a84442d7-4a0c-4f46-ae71-4dc99b3935cb", ResourceVersion:"1326", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9a:e5:d7:95:17:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:53.750007 containerd[2085]: 2025-01-13 21:33:53.741 [INFO][4357] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.17.229-k8s-nfs--server--provisioner--0-eth0" Jan 13 21:33:53.770358 containerd[2085]: time="2025-01-13T21:33:53.769842929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:53.770556 containerd[2085]: time="2025-01-13T21:33:53.770137842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:53.770556 containerd[2085]: time="2025-01-13T21:33:53.770159876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:53.770556 containerd[2085]: time="2025-01-13T21:33:53.770273764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:53.827101 containerd[2085]: time="2025-01-13T21:33:53.826734876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:53.827101 containerd[2085]: time="2025-01-13T21:33:53.826788632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:53.827101 containerd[2085]: time="2025-01-13T21:33:53.826803727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:53.827101 containerd[2085]: time="2025-01-13T21:33:53.826890510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:53.919584 containerd[2085]: time="2025-01-13T21:33:53.919543722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bdff566f-qt9p2,Uid:10f93562-cdc2-490d-a91b-1b4c12917da6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07\"" Jan 13 21:33:53.922659 containerd[2085]: time="2025-01-13T21:33:53.922170726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:33:53.942231 containerd[2085]: time="2025-01-13T21:33:53.942078641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a84442d7-4a0c-4f46-ae71-4dc99b3935cb,Namespace:default,Attempt:0,} returns sandbox id \"6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740\"" Jan 13 21:33:54.183944 kubelet[2580]: E0113 21:33:54.183807 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:54.711132 systemd[1]: run-containerd-runc-k8s.io-6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740-runc.d0c0JZ.mount: Deactivated successfully. Jan 13 21:33:55.073828 systemd-networkd[1658]: cali1f89e62e411: Gained IPv6LL Jan 13 21:33:55.185598 kubelet[2580]: E0113 21:33:55.185516 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:55.199646 systemd-networkd[1658]: cali60e51b789ff: Gained IPv6LL Jan 13 21:33:55.471357 (udev-worker)[4397]: Network interface NamePolicy= disabled on kernel command line. 
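
The WorkloadEndpoint dumps above list the nfs-server-provisioner container ports as Go hex literals. Decoded, they are the conventional NFS service ports. A minimal standalone Go sketch of the conversion (not Calico code, just the arithmetic):

    package main

    import "fmt"

    func main() {
        // Hex Port values copied from the WorkloadEndpoint dump above;
        // the decimal forms are the conventional NFS service ports.
        ports := []struct {
            name string
            port uint16
        }{
            {"nfs", 0x801},       // 2049
            {"nlockmgr", 0x8023}, // 32803
            {"mountd", 0x4e50},   // 20048
            {"rquotad", 0x36b},   // 875
            {"rpcbind", 0x6f},    // 111
            {"statd", 0x296},     // 662
        }
        for _, p := range ports {
            fmt.Printf("%-8s %d\n", p.name, p.port)
        }
    }
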
Jan 13 21:33:56.186621 kubelet[2580]: E0113 21:33:56.186399 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:56.591525 containerd[2085]: time="2025-01-13T21:33:56.591168891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:56.593190 containerd[2085]: time="2025-01-13T21:33:56.593037306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:33:56.594236 containerd[2085]: time="2025-01-13T21:33:56.594182271Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:56.596971 containerd[2085]: time="2025-01-13T21:33:56.596901822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:56.597863 containerd[2085]: time="2025-01-13T21:33:56.597630958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.675358076s" Jan 13 21:33:56.597863 containerd[2085]: time="2025-01-13T21:33:56.597674891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:33:56.598305 containerd[2085]: time="2025-01-13T21:33:56.598228230Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:33:56.615352 containerd[2085]: time="2025-01-13T21:33:56.613068646Z" level=info msg="CreateContainer within sandbox \"ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:33:56.685567 containerd[2085]: time="2025-01-13T21:33:56.685517889Z" level=info msg="CreateContainer within sandbox \"ce15a226ff79fa0aaaaee26c09ee0565655e425c733345b659fbc27100fb7a07\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"898c4a4b3bc6c198ed9b6062bc1d9878f1df93f0c4a3e27c581dde1a219d6082\"" Jan 13 21:33:56.687705 containerd[2085]: time="2025-01-13T21:33:56.687655660Z" level=info msg="StartContainer for \"898c4a4b3bc6c198ed9b6062bc1d9878f1df93f0c4a3e27c581dde1a219d6082\"" Jan 13 21:33:56.791591 containerd[2085]: time="2025-01-13T21:33:56.791426748Z" level=info msg="StartContainer for \"898c4a4b3bc6c198ed9b6062bc1d9878f1df93f0c4a3e27c581dde1a219d6082\" returns successfully" Jan 13 21:33:57.187488 kubelet[2580]: E0113 21:33:57.187443 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:57.772946 kubelet[2580]: I0113 21:33:57.772260 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54bdff566f-qt9p2" podStartSLOduration=4.095746621 podStartE2EDuration="6.772204605s" podCreationTimestamp="2025-01-13 21:33:51 
+0000 UTC" firstStartedPulling="2025-01-13 21:33:53.921554103 +0000 UTC m=+50.457485891" lastFinishedPulling="2025-01-13 21:33:56.59801209 +0000 UTC m=+53.133943875" observedRunningTime="2025-01-13 21:33:57.769786741 +0000 UTC m=+54.305718538" watchObservedRunningTime="2025-01-13 21:33:57.772204605 +0000 UTC m=+54.308136403" Jan 13 21:33:58.188569 kubelet[2580]: E0113 21:33:58.188336 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:58.271620 systemd-resolved[1977]: Under memory pressure, flushing caches. Jan 13 21:33:58.271711 systemd-resolved[1977]: Flushed all caches. Jan 13 21:33:58.273489 systemd-journald[1572]: Under memory pressure, flushing caches. Jan 13 21:33:58.537865 ntpd[2044]: Listen normally on 10 cali1f89e62e411 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:33:58.539222 ntpd[2044]: 13 Jan 21:33:58 ntpd[2044]: Listen normally on 10 cali1f89e62e411 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:33:58.539222 ntpd[2044]: 13 Jan 21:33:58 ntpd[2044]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:33:58.538280 ntpd[2044]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:33:59.189521 kubelet[2580]: E0113 21:33:59.189480 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:33:59.577210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718025147.mount: Deactivated successfully. Jan 13 21:34:00.189954 kubelet[2580]: E0113 21:34:00.189877 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:01.190130 kubelet[2580]: E0113 21:34:01.190079 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:02.194089 kubelet[2580]: E0113 21:34:02.193927 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:03.202034 kubelet[2580]: E0113 21:34:03.201997 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:03.378649 containerd[2085]: time="2025-01-13T21:34:03.378595939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:34:03.379867 containerd[2085]: time="2025-01-13T21:34:03.379821776Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 13 21:34:03.381555 containerd[2085]: time="2025-01-13T21:34:03.381416546Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:34:03.384762 containerd[2085]: time="2025-01-13T21:34:03.384703768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:34:03.386855 containerd[2085]: time="2025-01-13T21:34:03.386184438Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.787840165s" Jan 13 21:34:03.386855 containerd[2085]: time="2025-01-13T21:34:03.386232135Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 21:34:03.389885 containerd[2085]: time="2025-01-13T21:34:03.389849483Z" level=info msg="CreateContainer within sandbox \"6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:34:03.432580 containerd[2085]: time="2025-01-13T21:34:03.432531983Z" level=info msg="CreateContainer within sandbox \"6d2f45f64b58a85e8a195f7070a59387141050924d0e1c30aabbbc040f030740\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"49184c95b81df408159726a621cd06a62b056d3188a32373fa3ed1155cf192b2\"" Jan 13 21:34:03.435833 containerd[2085]: time="2025-01-13T21:34:03.433250887Z" level=info msg="StartContainer for \"49184c95b81df408159726a621cd06a62b056d3188a32373fa3ed1155cf192b2\"" Jan 13 21:34:03.489653 systemd[1]: run-containerd-runc-k8s.io-49184c95b81df408159726a621cd06a62b056d3188a32373fa3ed1155cf192b2-runc.y9gTuA.mount: Deactivated successfully. Jan 13 21:34:03.539830 containerd[2085]: time="2025-01-13T21:34:03.539139995Z" level=info msg="StartContainer for \"49184c95b81df408159726a621cd06a62b056d3188a32373fa3ed1155cf192b2\" returns successfully" Jan 13 21:34:04.131483 kubelet[2580]: E0113 21:34:04.131428 2580 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:04.248369 kubelet[2580]: E0113 21:34:04.246673 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:04.276165 containerd[2085]: time="2025-01-13T21:34:04.275995587Z" level=info msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" Jan 13 21:34:04.289527 systemd-journald[1572]: Under memory pressure, flushing caches. Jan 13 21:34:04.287424 systemd-resolved[1977]: Under memory pressure, flushing caches. Jan 13 21:34:04.287432 systemd-resolved[1977]: Flushed all caches. Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.626 [WARNING][4895] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-csi--node--driver--f6hv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"458fe02f-e573-4bee-9390-8d8b1d8e6284", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b", Pod:"csi-node-driver-f6hv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07fbce742cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.631 [INFO][4895] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.631 [INFO][4895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" iface="eth0" netns="" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.631 [INFO][4895] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.631 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.668 [INFO][4901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.668 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.668 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.676 [WARNING][4901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.676 [INFO][4901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.679 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:04.683249 containerd[2085]: 2025-01-13 21:34:04.681 [INFO][4895] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.684785 containerd[2085]: time="2025-01-13T21:34:04.683300282Z" level=info msg="TearDown network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" successfully" Jan 13 21:34:04.684785 containerd[2085]: time="2025-01-13T21:34:04.683359890Z" level=info msg="StopPodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" returns successfully" Jan 13 21:34:04.838188 containerd[2085]: time="2025-01-13T21:34:04.838126571Z" level=info msg="RemovePodSandbox for \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" Jan 13 21:34:04.838188 containerd[2085]: time="2025-01-13T21:34:04.838180383Z" level=info msg="Forcibly stopping sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\"" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.928 [WARNING][4919] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-csi--node--driver--f6hv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"458fe02f-e573-4bee-9390-8d8b1d8e6284", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"1c6ec7536f0e060840ec8eb99e1e6495bc61244667a84548e83289426cfc917b", Pod:"csi-node-driver-f6hv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07fbce742cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.928 [INFO][4919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.928 [INFO][4919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" iface="eth0" netns="" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.928 [INFO][4919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.928 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.955 [INFO][4925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.955 [INFO][4925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.955 [INFO][4925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.964 [WARNING][4925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.964 [INFO][4925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" HandleID="k8s-pod-network.ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Workload="172.31.17.229-k8s-csi--node--driver--f6hv9-eth0" Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.966 [INFO][4925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:04.969493 containerd[2085]: 2025-01-13 21:34:04.967 [INFO][4919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7" Jan 13 21:34:04.969493 containerd[2085]: time="2025-01-13T21:34:04.969411282Z" level=info msg="TearDown network for sandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" successfully" Jan 13 21:34:04.973548 containerd[2085]: time="2025-01-13T21:34:04.973500833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:34:04.973644 containerd[2085]: time="2025-01-13T21:34:04.973570900Z" level=info msg="RemovePodSandbox \"ebc4b7fa45d38e7844c1ceeede96b46d7580199d15a1ff769c1b19e839726ed7\" returns successfully" Jan 13 21:34:04.974154 containerd[2085]: time="2025-01-13T21:34:04.974120477Z" level=info msg="StopPodSandbox for \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\"" Jan 13 21:34:04.974337 containerd[2085]: time="2025-01-13T21:34:04.974206547Z" level=info msg="TearDown network for sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" successfully" Jan 13 21:34:04.974337 containerd[2085]: time="2025-01-13T21:34:04.974224070Z" level=info msg="StopPodSandbox for \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" returns successfully" Jan 13 21:34:04.974735 containerd[2085]: time="2025-01-13T21:34:04.974703664Z" level=info msg="RemovePodSandbox for \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\"" Jan 13 21:34:04.974822 containerd[2085]: time="2025-01-13T21:34:04.974738272Z" level=info msg="Forcibly stopping sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\"" Jan 13 21:34:04.974822 containerd[2085]: time="2025-01-13T21:34:04.974799031Z" level=info msg="TearDown network for sandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" successfully" Jan 13 21:34:04.978085 containerd[2085]: time="2025-01-13T21:34:04.978046907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:34:04.978181 containerd[2085]: time="2025-01-13T21:34:04.978142741Z" level=info msg="RemovePodSandbox \"85a0b9028c85fd2bf7381e345ef32f1759686bcf9de3d53e0d3484f1934fd59a\" returns successfully" Jan 13 21:34:04.979082 containerd[2085]: time="2025-01-13T21:34:04.978782717Z" level=info msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.028 [WARNING][4943] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c5eda378-4eb7-4846-8551-05ef7a53a762", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12", Pod:"nginx-deployment-6d5f899847-cnf6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4743b0bbb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.028 [INFO][4943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.028 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" iface="eth0" netns="" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.028 [INFO][4943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.028 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.053 [INFO][4949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.053 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.053 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.062 [WARNING][4949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.062 [INFO][4949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.066 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:05.070470 containerd[2085]: 2025-01-13 21:34:05.068 [INFO][4943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.071631 containerd[2085]: time="2025-01-13T21:34:05.070520964Z" level=info msg="TearDown network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" successfully" Jan 13 21:34:05.071631 containerd[2085]: time="2025-01-13T21:34:05.070564548Z" level=info msg="StopPodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" returns successfully" Jan 13 21:34:05.074439 containerd[2085]: time="2025-01-13T21:34:05.074358028Z" level=info msg="RemovePodSandbox for \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" Jan 13 21:34:05.074713 containerd[2085]: time="2025-01-13T21:34:05.074440655Z" level=info msg="Forcibly stopping sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\"" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.134 [WARNING][4967] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"c5eda378-4eb7-4846-8551-05ef7a53a762", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"a920a7a9d1cc9d01fa026143f462a508c2d15b734ba06128e7dc8cd0ec33ee12", Pod:"nginx-deployment-6d5f899847-cnf6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4743b0bbb23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.134 [INFO][4967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.134 [INFO][4967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" iface="eth0" netns="" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.134 [INFO][4967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.134 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.180 [INFO][4973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.180 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.180 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.194 [WARNING][4973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.194 [INFO][4973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" HandleID="k8s-pod-network.d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Workload="172.31.17.229-k8s-nginx--deployment--6d5f899847--cnf6b-eth0" Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.197 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:05.200736 containerd[2085]: 2025-01-13 21:34:05.199 [INFO][4967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827" Jan 13 21:34:05.201965 containerd[2085]: time="2025-01-13T21:34:05.200784034Z" level=info msg="TearDown network for sandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" successfully" Jan 13 21:34:05.216217 containerd[2085]: time="2025-01-13T21:34:05.216130608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:34:05.216642 containerd[2085]: time="2025-01-13T21:34:05.216237445Z" level=info msg="RemovePodSandbox \"d092445449fe71e9942957d083de733411fb85617ca01c69cceb888b6907c827\" returns successfully" Jan 13 21:34:05.247117 kubelet[2580]: E0113 21:34:05.246921 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:06.247999 kubelet[2580]: E0113 21:34:06.247780 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:06.335783 systemd-resolved[1977]: Under memory pressure, flushing caches. Jan 13 21:34:06.335792 systemd-resolved[1977]: Flushed all caches. Jan 13 21:34:06.337349 systemd-journald[1572]: Under memory pressure, flushing caches. 
Jan 13 21:34:07.248840 kubelet[2580]: E0113 21:34:07.248785 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:08.256476 kubelet[2580]: E0113 21:34:08.256431 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:09.257228 kubelet[2580]: E0113 21:34:09.257176 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:10.257740 kubelet[2580]: E0113 21:34:10.257689 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:11.258255 kubelet[2580]: E0113 21:34:11.258201 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:12.259026 kubelet[2580]: E0113 21:34:12.258975 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:13.259455 kubelet[2580]: E0113 21:34:13.259400 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:14.260350 kubelet[2580]: E0113 21:34:14.260285 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:15.260711 kubelet[2580]: E0113 21:34:15.260664 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:16.261294 kubelet[2580]: E0113 21:34:16.261249 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:17.262005 kubelet[2580]: E0113 21:34:17.261951 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:18.034667 kubelet[2580]: I0113 21:34:18.033703 2580 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=19.590238507 podStartE2EDuration="29.033653919s" podCreationTimestamp="2025-01-13 21:33:49 +0000 UTC" firstStartedPulling="2025-01-13 21:33:53.943321621 +0000 UTC m=+50.479253397" lastFinishedPulling="2025-01-13 21:34:03.386737019 +0000 UTC m=+59.922668809" observedRunningTime="2025-01-13 21:34:03.827516645 +0000 UTC m=+60.363448443" watchObservedRunningTime="2025-01-13 21:34:18.033653919 +0000 UTC m=+74.569585717" Jan 13 21:34:18.262569 kubelet[2580]: E0113 21:34:18.262519 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:19.262866 kubelet[2580]: E0113 21:34:19.262819 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:20.264009 kubelet[2580]: E0113 21:34:20.263958 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:21.264543 kubelet[2580]: E0113 21:34:21.264435 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:22.265623 kubelet[2580]: E0113 21:34:22.265585 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:23.265996 kubelet[2580]: E0113 
21:34:23.265943 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:24.131017 kubelet[2580]: E0113 21:34:24.130968 2580 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:24.266478 kubelet[2580]: E0113 21:34:24.266426 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:25.267311 kubelet[2580]: E0113 21:34:25.267226 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:26.267662 kubelet[2580]: E0113 21:34:26.267615 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:27.268749 kubelet[2580]: E0113 21:34:27.268695 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:28.269912 kubelet[2580]: E0113 21:34:28.269855 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:28.573025 kubelet[2580]: I0113 21:34:28.572893 2580 topology_manager.go:215] "Topology Admit Handler" podUID="4af8b350-6400-408b-aa6b-ac486c9fbe01" podNamespace="default" podName="test-pod-1" Jan 13 21:34:28.702273 kubelet[2580]: I0113 21:34:28.702028 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ddc23303-04dd-4715-a3bc-911758ac7945\" (UniqueName: \"kubernetes.io/nfs/4af8b350-6400-408b-aa6b-ac486c9fbe01-pvc-ddc23303-04dd-4715-a3bc-911758ac7945\") pod \"test-pod-1\" (UID: \"4af8b350-6400-408b-aa6b-ac486c9fbe01\") " pod="default/test-pod-1" Jan 13 21:34:28.702273 kubelet[2580]: I0113 21:34:28.702156 2580 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gd98\" (UniqueName: \"kubernetes.io/projected/4af8b350-6400-408b-aa6b-ac486c9fbe01-kube-api-access-7gd98\") pod \"test-pod-1\" (UID: \"4af8b350-6400-408b-aa6b-ac486c9fbe01\") " pod="default/test-pod-1" Jan 13 21:34:29.038446 kernel: FS-Cache: Loaded Jan 13 21:34:29.272042 kubelet[2580]: E0113 21:34:29.271758 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:29.332091 kernel: RPC: Registered named UNIX socket transport module. Jan 13 21:34:29.332310 kernel: RPC: Registered udp transport module. Jan 13 21:34:29.332394 kernel: RPC: Registered tcp transport module. Jan 13 21:34:29.333382 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:34:29.333461 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
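
The pod_startup_latency_tracker entry above for nfs-server-provisioner-0 is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (21:34:18.034 minus 21:33:49, about 29.034s), and podStartSLOduration subtracts the image-pull window. Reproducing the arithmetic from the monotonic (m=+...) offsets printed in that entry:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds) copied from the kubelet entry above.
        const (
            pullStart = 50.479253397 // firstStartedPulling  m=+50.479253397
            pullEnd   = 59.922668809 // lastFinishedPulling  m=+59.922668809
            e2e       = 29.033653919 // podStartE2EDuration
        )
        slo := e2e - (pullEnd - pullStart)
        fmt.Printf("podStartSLOduration=%.9fs\n", slo) // 19.590238507s, as logged
    }
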
Jan 13 21:34:29.958548 kernel: NFS: Registering the id_resolver key type Jan 13 21:34:29.958682 kernel: Key type id_resolver registered Jan 13 21:34:29.958716 kernel: Key type id_legacy registered Jan 13 21:34:30.104180 nfsidmap[5070]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 21:34:30.121742 nfsidmap[5071]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 21:34:30.272804 kubelet[2580]: E0113 21:34:30.272653 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:30.411774 containerd[2085]: time="2025-01-13T21:34:30.411727345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4af8b350-6400-408b-aa6b-ac486c9fbe01,Namespace:default,Attempt:0,}" Jan 13 21:34:30.624990 (udev-worker)[5057]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:34:30.627959 systemd-networkd[1658]: cali5ec59c6bf6e: Link UP Jan 13 21:34:30.629321 systemd-networkd[1658]: cali5ec59c6bf6e: Gained carrier Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.493 [INFO][5076] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.17.229-k8s-test--pod--1-eth0 default 4af8b350-6400-408b-aa6b-ac486c9fbe01 1460 0 2025-01-13 21:33:51 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.17.229 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.494 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.550 [INFO][5083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" HandleID="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Workload="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.566 [INFO][5083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" HandleID="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Workload="172.31.17.229-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319610), Attrs:map[string]string{"namespace":"default", "node":"172.31.17.229", "pod":"test-pod-1", "timestamp":"2025-01-13 21:34:30.550427026 +0000 UTC"}, Hostname:"172.31.17.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.566 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
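
The nfsidmap warnings above are the NFSv4 ID mapper declining to translate 'root@nfs-server-provisioner.default.svc.cluster.local' because the owner's domain does not match this node's idmapd domain, 'us-west-2.compute.internal'. Unmapped owners typically fall back to the nobody/nogroup IDs, which is harmless for this test mount. The node-side domain normally comes from /etc/idmapd.conf (or a DNS-derived default); an illustrative stanza:

    [General]
    # NFSv4 idmapping domain; owners from other domains are left unmapped.
    Domain = us-west-2.compute.internal
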
Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.566 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.566 [INFO][5083] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.17.229' Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.572 [INFO][5083] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.581 [INFO][5083] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.590 [INFO][5083] ipam/ipam.go 489: Trying affinity for 192.168.6.0/26 host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.592 [INFO][5083] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.595 [INFO][5083] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.595 [INFO][5083] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.597 [INFO][5083] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.603 [INFO][5083] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.613 [INFO][5083] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.5/26] block=192.168.6.0/26 handle="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.613 [INFO][5083] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.5/26] handle="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" host="172.31.17.229" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.613 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
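
The IPAM walk just logged (confirm affinity for 192.168.6.0/26, then claim 192.168.6.5) lines up with the rest of this log: the node holds that /26 block, and every workload IP seen in this section falls inside it (192.168.6.1 for the nginx deployment, .2 for csi-node-driver, .4 for nfs-server-provisioner, and now .5 for test-pod-1). A small standard-library Go check of the containment:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // IPAM block this node holds, per the log above.
        block := netip.MustParsePrefix("192.168.6.0/26")
        // Pod IPs assigned from it elsewhere in this log.
        for _, s := range []string{"192.168.6.1", "192.168.6.2", "192.168.6.4", "192.168.6.5"} {
            fmt.Printf("%s in %s: %v\n", s, block, block.Contains(netip.MustParseAddr(s)))
        }
    }
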
Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.613 [INFO][5083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.5/26] IPv6=[] ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" HandleID="k8s-pod-network.53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Workload="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.662688 containerd[2085]: 2025-01-13 21:34:30.618 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4af8b350-6400-408b-aa6b-ac486c9fbe01", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:30.664816 containerd[2085]: 2025-01-13 21:34:30.620 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.5/32] ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.664816 containerd[2085]: 2025-01-13 21:34:30.620 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.664816 containerd[2085]: 2025-01-13 21:34:30.630 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.664816 containerd[2085]: 2025-01-13 21:34:30.631 [INFO][5076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.17.229-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4af8b350-6400-408b-aa6b-ac486c9fbe01", ResourceVersion:"1460", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 33, 51, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.17.229", ContainerID:"53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:d1:4b:42:7e:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:34:30.664816 containerd[2085]: 2025-01-13 21:34:30.651 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.17.229-k8s-test--pod--1-eth0" Jan 13 21:34:30.712072 containerd[2085]: time="2025-01-13T21:34:30.709550675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:34:30.712072 containerd[2085]: time="2025-01-13T21:34:30.709614643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:34:30.712682 containerd[2085]: time="2025-01-13T21:34:30.712587316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:34:30.712922 containerd[2085]: time="2025-01-13T21:34:30.712870395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:34:30.816686 containerd[2085]: time="2025-01-13T21:34:30.816639251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4af8b350-6400-408b-aa6b-ac486c9fbe01,Namespace:default,Attempt:0,} returns sandbox id \"53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd\"" Jan 13 21:34:30.828391 containerd[2085]: time="2025-01-13T21:34:30.828357040Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:34:31.153752 containerd[2085]: time="2025-01-13T21:34:31.153633623Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:34:31.155647 containerd[2085]: time="2025-01-13T21:34:31.155581435Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:34:31.166010 containerd[2085]: time="2025-01-13T21:34:31.165631356Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 337.017082ms" Jan 13 21:34:31.166010 containerd[2085]: time="2025-01-13T21:34:31.166009205Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:34:31.172912 containerd[2085]: time="2025-01-13T21:34:31.172769527Z" level=info msg="CreateContainer within sandbox \"53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:34:31.199765 containerd[2085]: time="2025-01-13T21:34:31.199674656Z" level=info msg="CreateContainer within sandbox \"53548172f343c40a3ced3d35ab74fe30555fe1b184174121cfc3eead170be1dd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"393757a5c8a3ba08d90606661c7221e829988f68f697e1086efd594036d19fa8\"" Jan 13 21:34:31.201352 containerd[2085]: time="2025-01-13T21:34:31.201047018Z" level=info msg="StartContainer for \"393757a5c8a3ba08d90606661c7221e829988f68f697e1086efd594036d19fa8\"" Jan 13 21:34:31.202024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920764731.mount: Deactivated successfully. 
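
The nginx pull just above finished in roughly 337ms with only 61 bytes read, and containerd emitted an ImageUpdate rather than ImageCreate event: the image content was already on the node, so the "pull" amounted to a registry digest check. On a node with crictl available, something like the following would confirm the cached image (output format varies by version):

    crictl inspecti ghcr.io/flatcar/nginx:latest
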
Jan 13 21:34:31.273774 kubelet[2580]: E0113 21:34:31.273741 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:31.282488 containerd[2085]: time="2025-01-13T21:34:31.282446837Z" level=info msg="StartContainer for \"393757a5c8a3ba08d90606661c7221e829988f68f697e1086efd594036d19fa8\" returns successfully" Jan 13 21:34:32.275127 kubelet[2580]: E0113 21:34:32.275058 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:32.384492 systemd-networkd[1658]: cali5ec59c6bf6e: Gained IPv6LL Jan 13 21:34:33.275905 kubelet[2580]: E0113 21:34:33.275636 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:34.277137 kubelet[2580]: E0113 21:34:34.277065 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:34:34.537612 ntpd[2044]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:34:34.538184 ntpd[2044]: 13 Jan 21:34:34 ntpd[2044]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:34:35.277983 kubelet[2580]: E0113 21:34:35.277927 2580 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
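
This closes out the flow exercised in this section: the NFS provisioner pod served the pvc-ddc23303-04dd-4715-a3bc-911758ac7945 volume, the kernel loaded its NFS/RPC client support, and test-pod-1 mounted the share and started successfully with IP 192.168.6.5 on cali5ec59c6bf6e. A quick sanity check from a workstation with cluster access might be (illustrative):

    kubectl get pod test-pod-1 -o wide   # expect IP 192.168.6.5 on node 172.31.17.229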