Jan 30 13:54:04.142217 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:54:04.142263 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:54:04.142278 kernel: BIOS-provided physical RAM map:
Jan 30 13:54:04.142290 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:54:04.142302 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:54:04.142313 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:54:04.142331 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 30 13:54:04.142344 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 30 13:54:04.142368 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 30 13:54:04.142381 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:54:04.142393 kernel: NX (Execute Disable) protection: active
Jan 30 13:54:04.142404 kernel: APIC: Static calls initialized
Jan 30 13:54:04.142416 kernel: SMBIOS 2.7 present.
Jan 30 13:54:04.142429 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 30 13:54:04.142447 kernel: Hypervisor detected: KVM
Jan 30 13:54:04.142462 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:54:04.142475 kernel: kvm-clock: using sched offset of 7282854651 cycles
Jan 30 13:54:04.142490 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:54:04.142505 kernel: tsc: Detected 2499.996 MHz processor
Jan 30 13:54:04.142519 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:54:04.142534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:54:04.142551 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 30 13:54:04.142565 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:54:04.142579 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:54:04.142594 kernel: Using GB pages for direct mapping
Jan 30 13:54:04.142607 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:54:04.142713 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 30 13:54:04.142728 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 30 13:54:04.142742 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:54:04.142757 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 30 13:54:04.142775 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 30 13:54:04.142789 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:54:04.142804 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:54:04.142819 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 30 13:54:04.142833 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:54:04.142847 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 30 13:54:04.142861 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 30 13:54:04.142876 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:54:04.142890 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 30 13:54:04.142908 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 30 13:54:04.142928 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 30 13:54:04.142944 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 30 13:54:04.142959 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 30 13:54:04.142975 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 30 13:54:04.142993 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 30 13:54:04.143008 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 30 13:54:04.143225 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 30 13:54:04.143243 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 30 13:54:04.143259 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:54:04.143275 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:54:04.143291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 30 13:54:04.143306 kernel: NUMA: Initialized distance table, cnt=1
Jan 30 13:54:04.143321 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 30 13:54:04.143342 kernel: Zone ranges:
Jan 30 13:54:04.143358 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:54:04.143373 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 30 13:54:04.143389 kernel: Normal empty
Jan 30 13:54:04.143405 kernel: Movable zone start for each node
Jan 30 13:54:04.143420 kernel: Early memory node ranges
Jan 30 13:54:04.143435 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:54:04.143450 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 30 13:54:04.143466 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 30 13:54:04.143481 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:54:04.143499 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:54:04.143512 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 30 13:54:04.143527 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:54:04.143542 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:54:04.143556 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 30 13:54:04.143571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:54:04.143587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:54:04.143602 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:54:04.143616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:54:04.143636 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:54:04.143650 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:54:04.143664 kernel: TSC deadline timer available
Jan 30 13:54:04.143677 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:54:04.143692 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:54:04.143708 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 30 13:54:04.143723 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:54:04.143739 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:54:04.143755 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:54:04.143774 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:54:04.143790 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:54:04.143805 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:54:04.143819 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:54:04.143833 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:54:04.143851 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:54:04.143867 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:54:04.143882 kernel: random: crng init done
Jan 30 13:54:04.143901 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:54:04.143916 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:54:04.143931 kernel: Fallback order for Node 0: 0
Jan 30 13:54:04.143947 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 30 13:54:04.143962 kernel: Policy zone: DMA32
Jan 30 13:54:04.143977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:54:04.146773 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Jan 30 13:54:04.146795 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:54:04.146811 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:54:04.146836 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:54:04.146851 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:54:04.146864 kernel: Dynamic Preempt: voluntary
Jan 30 13:54:04.146877 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:54:04.146895 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:54:04.146910 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:54:04.146926 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:54:04.146942 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:54:04.146958 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:54:04.146977 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:54:04.146992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:54:04.147008 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:54:04.147206 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:54:04.147223 kernel: Console: colour VGA+ 80x25
Jan 30 13:54:04.147238 kernel: printk: console [ttyS0] enabled
Jan 30 13:54:04.147253 kernel: ACPI: Core revision 20230628
Jan 30 13:54:04.147269 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 30 13:54:04.147284 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:54:04.147304 kernel: x2apic enabled
Jan 30 13:54:04.147320 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:54:04.147348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 30 13:54:04.147369 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 30 13:54:04.147386 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:54:04.147403 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:54:04.147419 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:54:04.147435 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:54:04.147451 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:54:04.147468 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:54:04.147484 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:54:04.147501 kernel: RETBleed: Vulnerable
Jan 30 13:54:04.147521 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:54:04.147537 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:54:04.149900 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:54:04.149922 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:54:04.149939 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:54:04.149956 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:54:04.149974 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:54:04.149998 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:54:04.150014 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:54:04.150030 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:54:04.150047 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:54:04.150083 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:54:04.150100 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 13:54:04.150117 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:54:04.150230 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:54:04.150252 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:54:04.150269 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 30 13:54:04.150285 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 30 13:54:04.150306 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 30 13:54:04.150323 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 30 13:54:04.150340 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 30 13:54:04.150367 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:54:04.150383 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:54:04.150400 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:54:04.150416 kernel: landlock: Up and running.
Jan 30 13:54:04.150432 kernel: SELinux: Initializing.
Jan 30 13:54:04.150449 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:54:04.150465 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:54:04.150481 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Jan 30 13:54:04.150502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:54:04.150519 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:54:04.150537 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:54:04.150554 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:54:04.150570 kernel: signal: max sigframe size: 3632
Jan 30 13:54:04.150588 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:54:04.150605 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:54:04.150622 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:54:04.150638 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:54:04.150656 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:54:04.150673 kernel: .... node #0, CPUs: #1
Jan 30 13:54:04.150691 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:54:04.150709 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:54:04.150726 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:54:04.150863 kernel: smpboot: Max logical packages: 1
Jan 30 13:54:04.150881 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 30 13:54:04.150968 kernel: devtmpfs: initialized
Jan 30 13:54:04.150993 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:54:04.151009 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:54:04.151206 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:54:04.151224 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:54:04.151241 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:54:04.151258 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:54:04.151274 kernel: audit: type=2000 audit(1738245242.270:1): state=initialized audit_enabled=0 res=1
Jan 30 13:54:04.151291 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:54:04.151308 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:54:04.151329 kernel: cpuidle: using governor menu
Jan 30 13:54:04.151346 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:54:04.151363 kernel: dca service started, version 1.12.1
Jan 30 13:54:04.151379 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:54:04.151396 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:54:04.151414 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:54:04.151431 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:54:04.151448 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:54:04.151465 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:54:04.151486 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:54:04.151504 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:54:04.151521 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:54:04.151538 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:54:04.151554 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:54:04.151572 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:54:04.151589 kernel: ACPI: Interpreter enabled
Jan 30 13:54:04.151606 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:54:04.151623 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:54:04.151640 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:54:04.151659 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:54:04.151676 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:54:04.151693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:54:04.155826 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:54:04.156008 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:54:04.156171 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:54:04.156194 kernel: acpiphp: Slot [3] registered
Jan 30 13:54:04.156218 kernel: acpiphp: Slot [4] registered
Jan 30 13:54:04.156235 kernel: acpiphp: Slot [5] registered
Jan 30 13:54:04.156252 kernel: acpiphp: Slot [6] registered
Jan 30 13:54:04.156268 kernel: acpiphp: Slot [7] registered
Jan 30 13:54:04.156285 kernel: acpiphp: Slot [8] registered
Jan 30 13:54:04.156302 kernel: acpiphp: Slot [9] registered
Jan 30 13:54:04.156317 kernel: acpiphp: Slot [10] registered
Jan 30 13:54:04.156334 kernel: acpiphp: Slot [11] registered
Jan 30 13:54:04.156351 kernel: acpiphp: Slot [12] registered
Jan 30 13:54:04.156371 kernel: acpiphp: Slot [13] registered
Jan 30 13:54:04.156388 kernel: acpiphp: Slot [14] registered
Jan 30 13:54:04.156404 kernel: acpiphp: Slot [15] registered
Jan 30 13:54:04.156421 kernel: acpiphp: Slot [16] registered
Jan 30 13:54:04.156437 kernel: acpiphp: Slot [17] registered
Jan 30 13:54:04.156454 kernel: acpiphp: Slot [18] registered
Jan 30 13:54:04.156470 kernel: acpiphp: Slot [19] registered
Jan 30 13:54:04.156487 kernel: acpiphp: Slot [20] registered
Jan 30 13:54:04.156503 kernel: acpiphp: Slot [21] registered
Jan 30 13:54:04.156520 kernel: acpiphp: Slot [22] registered
Jan 30 13:54:04.156539 kernel: acpiphp: Slot [23] registered
Jan 30 13:54:04.156556 kernel: acpiphp: Slot [24] registered
Jan 30 13:54:04.156572 kernel: acpiphp: Slot [25] registered
Jan 30 13:54:04.156589 kernel: acpiphp: Slot [26] registered
Jan 30 13:54:04.156605 kernel: acpiphp: Slot [27] registered
Jan 30 13:54:04.156621 kernel: acpiphp: Slot [28] registered
Jan 30 13:54:04.156637 kernel: acpiphp: Slot [29] registered
Jan 30 13:54:04.156653 kernel: acpiphp: Slot [30] registered
Jan 30 13:54:04.156669 kernel: acpiphp: Slot [31] registered
Jan 30 13:54:04.156689 kernel: PCI host bridge to bus 0000:00
Jan 30 13:54:04.156844 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:54:04.156989 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:54:04.157318 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:54:04.157450 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:54:04.157572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:54:04.157838 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:54:04.158001 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:54:04.158170 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 30 13:54:04.158429 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:54:04.162767 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 30 13:54:04.162943 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 30 13:54:04.163101 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 30 13:54:04.163241 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 30 13:54:04.163391 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 30 13:54:04.163527 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 30 13:54:04.163665 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 30 13:54:04.163798 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 12695 usecs
Jan 30 13:54:04.163946 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 30 13:54:04.164099 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 30 13:54:04.164243 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:54:04.164378 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:54:04.164522 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:54:04.164658 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 30 13:54:04.164801 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:54:04.166654 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 30 13:54:04.166703 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:54:04.166734 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:54:04.166751 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:54:04.166767 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:54:04.166783 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:54:04.166800 kernel: iommu: Default domain type: Translated
Jan 30 13:54:04.166815 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:54:04.166828 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:54:04.166842 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:54:04.166856 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:54:04.166875 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 30 13:54:04.167027 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 30 13:54:04.167376 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 30 13:54:04.167541 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:54:04.167561 kernel: vgaarb: loaded
Jan 30 13:54:04.167578 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 13:54:04.167594 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 30 13:54:04.167611 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:54:04.167627 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:54:04.167650 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:54:04.167666 kernel: pnp: PnP ACPI init
Jan 30 13:54:04.167682 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:54:04.167699 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:54:04.167715 kernel: NET: Registered PF_INET protocol family
Jan 30 13:54:04.167731 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:54:04.167747 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:54:04.167763 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:54:04.167781 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:54:04.167796 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:54:04.167811 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:54:04.167827 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:54:04.167843 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:54:04.167858 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:54:04.167873 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:54:04.168012 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:54:04.168166 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:54:04.168289 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:54:04.168525 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:54:04.168785 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:54:04.168812 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:54:04.168830 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:54:04.168846 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 30 13:54:04.168863 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:54:04.168878 kernel: Initialise system trusted keyrings
Jan 30 13:54:04.168900 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:54:04.168915 kernel: Key type asymmetric registered
Jan 30 13:54:04.168930 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:54:04.168945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:54:04.168961 kernel: io scheduler mq-deadline registered
Jan 30 13:54:04.168976 kernel: io scheduler kyber registered
Jan 30 13:54:04.168991 kernel: io scheduler bfq registered
Jan 30 13:54:04.169006 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:54:04.169022 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:54:04.169040 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:54:04.169094 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:54:04.169110 kernel: i8042: Warning: Keylock active
Jan 30 13:54:04.169125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:54:04.169140 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:54:04.169283 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:54:04.169402 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:54:04.169519 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:54:03 UTC (1738245243)
Jan 30 13:54:04.169718 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:54:04.169741 kernel: intel_pstate: CPU model not supported
Jan 30 13:54:04.169757 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:54:04.169772 kernel: Segment Routing with IPv6
Jan 30 13:54:04.169787 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:54:04.169813 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:54:04.169829 kernel: Key type dns_resolver registered
Jan 30 13:54:04.169844 kernel: IPI shorthand broadcast: enabled
Jan 30 13:54:04.169860 kernel: sched_clock: Marking stable (971004494, 457444840)->(1657685909, -229236575)
Jan 30 13:54:04.169880 kernel: registered taskstats version 1
Jan 30 13:54:04.169895 kernel: Loading compiled-in X.509 certificates
Jan 30 13:54:04.169910 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:54:04.169925 kernel: Key type .fscrypt registered
Jan 30 13:54:04.169940 kernel: Key type fscrypt-provisioning registered
Jan 30 13:54:04.169956 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:54:04.169972 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:54:04.169987 kernel: ima: No architecture policies found
Jan 30 13:54:04.170003 kernel: clk: Disabling unused clocks
Jan 30 13:54:04.170021 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:54:04.170036 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:54:04.170052 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:54:04.170107 kernel: Run /init as init process
Jan 30 13:54:04.170122 kernel: with arguments:
Jan 30 13:54:04.170138 kernel: /init
Jan 30 13:54:04.170152 kernel: with environment:
Jan 30 13:54:04.170167 kernel: HOME=/
Jan 30 13:54:04.170182 kernel: TERM=linux
Jan 30 13:54:04.170201 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:54:04.170243 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:54:04.170263 systemd[1]: Detected virtualization amazon.
Jan 30 13:54:04.170280 systemd[1]: Detected architecture x86-64.
Jan 30 13:54:04.170295 systemd[1]: Running in initrd.
Jan 30 13:54:04.170310 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:54:04.170327 systemd[1]: Hostname set to .
Jan 30 13:54:04.170347 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:54:04.170393 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:54:04.170411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:54:04.170427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:54:04.170534 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:54:04.170553 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:54:04.170570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:54:04.170655 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:54:04.170697 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:54:04.170729 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:54:04.170747 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:54:04.170765 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:54:04.170783 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:54:04.170801 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:54:04.170822 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:54:04.170840 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:54:04.170859 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:54:04.170877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:54:04.170895 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:54:04.170913 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:54:04.170931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:54:04.171024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:54:04.171045 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:54:04.171106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:54:04.171124 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:54:04.171192 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:54:04.171212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:54:04.171230 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:54:04.171255 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:54:04.171273 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:54:04.171291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:54:04.171350 systemd-journald[178]: Collecting audit messages is disabled.
Jan 30 13:54:04.171393 systemd-journald[178]: Journal started
Jan 30 13:54:04.171469 systemd-journald[178]: Runtime Journal (/run/log/journal/ec28d207c21420a8f8add82a24ad1135) is 4.8M, max 38.6M, 33.7M free.
Jan 30 13:54:04.173482 systemd-modules-load[179]: Inserted module 'overlay'
Jan 30 13:54:04.188288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:54:04.191925 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:54:04.192975 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:54:04.194991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:54:04.196957 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:54:04.214410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:54:04.219243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:54:04.248248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:54:04.250541 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 30 13:54:04.421465 kernel: Bridge firewalling registered
Jan 30 13:54:04.251872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:54:04.430585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:54:04.438188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:54:04.445604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:54:04.470710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:54:04.505291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:54:04.543714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:54:04.548238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:54:04.581356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:54:04.583522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:54:04.590987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:54:04.615247 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:54:04.650546 dracut-cmdline[213]: dracut-dracut-053
Jan 30 13:54:04.658737 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:54:04.670933 systemd-resolved[206]: Positive Trust Anchors:
Jan 30 13:54:04.671045 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:54:04.671153 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:54:04.679347 systemd-resolved[206]: Defaulting to hostname 'linux'.
Jan 30 13:54:04.681208 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:54:04.705335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:54:04.909095 kernel: SCSI subsystem initialized
Jan 30 13:54:04.920090 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:54:04.931089 kernel: iscsi: registered transport (tcp)
Jan 30 13:54:04.967536 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:54:04.967644 kernel: QLogic iSCSI HBA Driver
Jan 30 13:54:05.082026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:54:05.091489 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:54:05.168082 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:54:05.168177 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:54:05.168207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:54:05.255111 kernel: raid6: avx512x4 gen() 13989 MB/s
Jan 30 13:54:05.280118 kernel: raid6: avx512x2 gen() 13205 MB/s
Jan 30 13:54:05.297115 kernel: raid6: avx512x1 gen() 6345 MB/s
Jan 30 13:54:05.316111 kernel: raid6: avx2x4 gen() 6055 MB/s
Jan 30 13:54:05.335118 kernel: raid6: avx2x2 gen() 4494 MB/s
Jan 30 13:54:05.352813 kernel: raid6: avx2x1 gen() 8163 MB/s
Jan 30 13:54:05.352891 kernel: raid6: using algorithm avx512x4 gen() 13989 MB/s
Jan 30 13:54:05.377790 kernel: raid6: .... xor() 4869 MB/s, rmw enabled
Jan 30 13:54:05.377884 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:54:05.417099 kernel: xor: automatically using best checksumming function avx
Jan 30 13:54:05.678090 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:54:05.691786 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:54:05.706453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:54:05.766294 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 30 13:54:05.779762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:54:05.805919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:54:05.870623 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jan 30 13:54:05.912799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:54:05.923512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:54:06.013587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:54:06.025318 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:54:06.063351 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:54:06.069332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:54:06.076173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:54:06.078766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:54:06.091385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:54:06.163785 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:54:06.193183 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:54:06.207952 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 13:54:06.245252 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 13:54:06.245451 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:54:06.245474 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:54:06.245494 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 30 13:54:06.245779 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:31:d9:d8:04:e9
Jan 30 13:54:06.248029 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:54:06.285120 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 13:54:06.285444 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:54:06.290684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:54:06.292494 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:54:06.300089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:54:06.302794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:54:06.303168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:54:06.305235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:54:06.324200 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 13:54:06.316530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:54:06.336390 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:54:06.336484 kernel: GPT:9289727 != 16777215
Jan 30 13:54:06.336508 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:54:06.339646 kernel: GPT:9289727 != 16777215
Jan 30 13:54:06.339851 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:54:06.339867 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:54:06.499154 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (453)
Jan 30 13:54:06.505090 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (445)
Jan 30 13:54:06.643590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:54:06.659016 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:54:06.725288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 13:54:06.744131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:54:06.762562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 13:54:06.776028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 13:54:06.782229 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 13:54:06.791305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:54:06.798253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:54:06.810711 disk-uuid[626]: Primary Header is updated.
Jan 30 13:54:06.810711 disk-uuid[626]: Secondary Entries is updated.
Jan 30 13:54:06.810711 disk-uuid[626]: Secondary Header is updated.
Jan 30 13:54:06.820089 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:54:06.824080 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:54:06.836093 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:54:07.845088 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:54:07.850517 disk-uuid[627]: The operation has completed successfully.
Jan 30 13:54:08.009236 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:54:08.009359 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:54:08.027497 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:54:08.044538 sh[970]: Success
Jan 30 13:54:08.059157 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:54:08.172528 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:54:08.184210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:54:08.188427 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:54:08.250270 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:54:08.250371 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:54:08.250394 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:54:08.251375 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:54:08.252158 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:54:08.294090 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:54:08.297026 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:54:08.302177 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:54:08.314680 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:54:08.330603 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:54:08.356085 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:54:08.356199 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:54:08.356218 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:54:08.367084 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:54:08.383233 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:54:08.384994 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:54:08.407623 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:54:08.419379 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:54:08.538783 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:54:08.555411 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:54:08.629367 systemd-networkd[1163]: lo: Link UP
Jan 30 13:54:08.629377 systemd-networkd[1163]: lo: Gained carrier
Jan 30 13:54:08.634420 systemd-networkd[1163]: Enumeration completed
Jan 30 13:54:08.635199 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:54:08.635332 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:54:08.635338 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:54:08.640345 systemd[1]: Reached target network.target - Network.
Jan 30 13:54:08.662905 systemd-networkd[1163]: eth0: Link UP
Jan 30 13:54:08.662915 systemd-networkd[1163]: eth0: Gained carrier
Jan 30 13:54:08.662935 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:54:08.684456 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.31.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:54:08.705248 ignition[1076]: Ignition 2.19.0
Jan 30 13:54:08.705263 ignition[1076]: Stage: fetch-offline
Jan 30 13:54:08.705528 ignition[1076]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:08.705542 ignition[1076]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:54:08.709038 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:54:08.706303 ignition[1076]: Ignition finished successfully
Jan 30 13:54:08.725357 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:54:08.757432 ignition[1171]: Ignition 2.19.0
Jan 30 13:54:08.757448 ignition[1171]: Stage: fetch
Jan 30 13:54:08.757908 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:08.757928 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:54:08.758091 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:54:08.770981 ignition[1171]: PUT result: OK
Jan 30 13:54:08.774503 ignition[1171]: parsed url from cmdline: ""
Jan 30 13:54:08.774519 ignition[1171]: no config URL provided
Jan 30 13:54:08.774530 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:54:08.774546 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:54:08.774568 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:54:08.775685 ignition[1171]: PUT result: OK
Jan 30 13:54:08.775733 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 13:54:08.783133 ignition[1171]: GET result: OK
Jan 30 13:54:08.784078 ignition[1171]: parsing config with SHA512: b3098426efbe9db6dd96809a45e2525a34f946ae9c98947de5265643b82e378d918ff5b1d3a172d7c6b41d8e35b608b0dea6270dbdf9fd29807ae23bc8cf4d7c
Jan 30 13:54:08.791232 unknown[1171]: fetched base config from "system"
Jan 30 13:54:08.791259 unknown[1171]: fetched base config from "system"
Jan 30 13:54:08.792045 ignition[1171]: fetch: fetch complete
Jan 30 13:54:08.791266 unknown[1171]: fetched user config from "aws"
Jan 30 13:54:08.792131 ignition[1171]: fetch: fetch passed
Jan 30 13:54:08.796731 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:54:08.792375 ignition[1171]: Ignition finished successfully
Jan 30 13:54:08.811403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:54:08.841891 ignition[1177]: Ignition 2.19.0
Jan 30 13:54:08.841908 ignition[1177]: Stage: kargs
Jan 30 13:54:08.842425 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:08.842441 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:54:08.842653 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:54:08.844378 ignition[1177]: PUT result: OK
Jan 30 13:54:08.852578 ignition[1177]: kargs: kargs passed
Jan 30 13:54:08.853975 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:54:08.852638 ignition[1177]: Ignition finished successfully
Jan 30 13:54:08.868369 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:54:08.905349 ignition[1183]: Ignition 2.19.0
Jan 30 13:54:08.905366 ignition[1183]: Stage: disks
Jan 30 13:54:08.905905 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:08.905923 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:54:08.906103 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:54:08.907818 ignition[1183]: PUT result: OK
Jan 30 13:54:08.917882 ignition[1183]: disks: disks passed
Jan 30 13:54:08.917985 ignition[1183]: Ignition finished successfully
Jan 30 13:54:08.922884 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:54:08.923540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:54:08.929607 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:54:08.933987 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:54:08.938415 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:54:08.938550 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:54:08.949544 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:54:09.010781 systemd-fsck[1191]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:54:09.019674 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:54:09.038251 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:54:09.155088 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:54:09.155357 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:54:09.158191 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:54:09.166207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:54:09.174349 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:54:09.182955 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:54:09.183044 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:54:09.185955 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:54:09.198275 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1210)
Jan 30 13:54:09.198322 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:54:09.198340 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:54:09.198367 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:54:09.202096 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:54:09.205164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:54:09.209490 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:54:09.226279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:54:09.415505 initrd-setup-root[1234]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:54:09.422671 initrd-setup-root[1241]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:54:09.439472 initrd-setup-root[1248]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:54:09.457406 initrd-setup-root[1255]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:54:09.613302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:54:09.622251 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:54:09.627280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:54:09.639206 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:54:09.640239 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:54:09.678300 ignition[1323]: INFO : Ignition 2.19.0
Jan 30 13:54:09.678300 ignition[1323]: INFO : Stage: mount
Jan 30 13:54:09.681099 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:54:09.681099 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:54:09.681099 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:54:09.686817 ignition[1323]: INFO : PUT result: OK
Jan 30 13:54:09.691190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:54:09.704529 ignition[1323]: INFO : mount: mount passed
Jan 30 13:54:09.704529 ignition[1323]: INFO : Ignition finished successfully
Jan 30 13:54:09.700211 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:54:09.722914 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:54:09.768436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:54:09.793104 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1335)
Jan 30 13:54:09.797145 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:54:09.797243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:54:09.797269 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:54:09.809099 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:54:09.815595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:54:09.849898 ignition[1352]: INFO : Ignition 2.19.0 Jan 30 13:54:09.849898 ignition[1352]: INFO : Stage: files Jan 30 13:54:09.852509 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:09.852509 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:54:09.852509 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:54:09.857676 ignition[1352]: INFO : PUT result: OK Jan 30 13:54:09.861748 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:54:09.866196 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:54:09.866196 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:54:09.874443 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:54:09.877110 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:54:09.882922 unknown[1352]: wrote ssh authorized keys file for user: core Jan 30 13:54:09.885359 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:09.891649 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:09.965010 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:54:10.247275 systemd-networkd[1163]: eth0: Gained IPv6LL Jan 30 13:54:10.300668 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 30 13:54:10.893657 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:10.893657 ignition[1352]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 30 13:54:10.899629 ignition[1352]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 13:54:10.903293 ignition[1352]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:54:10.903293 ignition[1352]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 30 13:54:10.909382 ignition[1352]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:10.912857 ignition[1352]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:10.915829 ignition[1352]: INFO : files: files passed Jan 30 13:54:10.915829 ignition[1352]: INFO : Ignition finished successfully Jan 30 13:54:10.918875 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:54:10.932386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:54:10.938900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:54:10.947836 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:54:10.947978 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:54:10.979659 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:10.979659 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:10.986836 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:10.990437 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:10.997212 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:54:11.005304 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:54:11.065866 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:54:11.066009 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:54:11.075304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:54:11.075442 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:54:11.079672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:54:11.086447 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:54:11.129555 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:11.139545 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:54:11.157230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:11.157558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:11.164436 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:54:11.168543 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:54:11.170583 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:11.174662 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:54:11.176160 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:54:11.180717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:54:11.184148 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:11.188144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:11.188560 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:54:11.194412 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:11.197917 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:54:11.200764 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:54:11.203510 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:54:11.203822 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:54:11.204024 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:11.205045 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:11.205594 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:11.206639 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:54:11.210275 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:11.217048 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:54:11.217318 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:11.221047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:54:11.221200 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:11.226979 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:54:11.227133 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:54:11.255891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:54:11.258210 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:54:11.259005 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:11.276738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:54:11.279442 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:54:11.279749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:11.282408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:54:11.282728 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:11.298849 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:54:11.299037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 30 13:54:11.306095 ignition[1405]: INFO : Ignition 2.19.0 Jan 30 13:54:11.306095 ignition[1405]: INFO : Stage: umount Jan 30 13:54:11.311992 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:11.311992 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:54:11.311992 ignition[1405]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:54:11.318022 ignition[1405]: INFO : PUT result: OK Jan 30 13:54:11.321847 ignition[1405]: INFO : umount: umount passed Jan 30 13:54:11.324519 ignition[1405]: INFO : Ignition finished successfully Jan 30 13:54:11.327299 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:54:11.327545 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:54:11.333412 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:54:11.333530 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:54:11.335702 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:54:11.335838 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:54:11.338930 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:54:11.338988 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:54:11.340574 systemd[1]: Stopped target network.target - Network. Jan 30 13:54:11.342002 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:54:11.342106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:11.344163 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:54:11.349174 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:54:11.354147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:11.371215 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:54:11.373097 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:54:11.385930 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:54:11.386004 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:11.389402 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:54:11.389587 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:11.406344 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:54:11.406543 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:54:11.408995 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:54:11.409103 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:11.415136 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:54:11.419140 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:54:11.432139 systemd-networkd[1163]: eth0: DHCPv6 lease lost Jan 30 13:54:11.434120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:54:11.436897 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:54:11.438250 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:54:11.441769 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:54:11.441832 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:11.450207 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 30 13:54:11.451655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:54:11.451726 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:11.455033 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:11.463539 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:54:11.463644 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:54:11.476294 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:54:11.476481 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:11.480978 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:54:11.481054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:11.482848 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:54:11.482887 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:11.490794 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:54:11.490874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:11.495258 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:54:11.495317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:11.499641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:11.499705 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:11.514403 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:54:11.516029 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:54:11.516111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:11.517690 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:54:11.517737 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:11.522944 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:54:11.523011 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:11.525269 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:54:11.525325 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:11.529507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:11.529574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:11.534464 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:54:11.534630 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:54:11.547639 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:54:11.547797 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:54:11.569035 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:54:11.569214 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:54:11.572020 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:54:11.573309 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:54:11.573384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 30 13:54:11.581308 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:54:11.604137 systemd[1]: Switching root. Jan 30 13:54:11.636440 systemd-journald[178]: Journal stopped Jan 30 13:54:13.498499 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 30 13:54:13.498585 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:54:13.498616 kernel: SELinux: policy capability open_perms=1 Jan 30 13:54:13.498641 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:54:13.498665 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:54:13.498683 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:54:13.498705 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:54:13.498725 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:54:13.498745 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:54:13.498766 kernel: audit: type=1403 audit(1738245252.074:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:54:13.498793 systemd[1]: Successfully loaded SELinux policy in 40.361ms. Jan 30 13:54:13.498822 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.861ms. Jan 30 13:54:13.498852 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:54:13.498874 systemd[1]: Detected virtualization amazon. Jan 30 13:54:13.498896 systemd[1]: Detected architecture x86-64. Jan 30 13:54:13.498920 systemd[1]: Detected first boot. Jan 30 13:54:13.498947 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:54:13.498969 zram_generator::config[1469]: No configuration found. Jan 30 13:54:13.498991 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:54:13.499014 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:54:13.499036 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 13:54:13.499083 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:54:13.499107 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:54:13.499141 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:54:13.499169 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:54:13.499193 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:54:13.499215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:54:13.499238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:54:13.499263 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:54:13.499287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:13.499313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:13.499337 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:54:13.499367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 30 13:54:13.499391 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:54:13.499415 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:54:13.499439 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:54:13.499470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:13.499494 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:54:13.499519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:13.499545 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:13.499567 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:54:13.499597 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:54:13.499622 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:54:13.499648 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:54:13.499674 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:54:13.499698 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:54:13.499722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:13.499746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:13.499771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:13.499796 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:54:13.499825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:54:13.499849 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:54:13.499874 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:54:13.499897 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:13.499922 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:54:13.499945 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:54:13.499971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:54:13.499996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:54:13.500024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:13.500049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:54:13.500091 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:54:13.500117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:13.500142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:13.500165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:13.500408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:54:13.500433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:13.500459 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 30 13:54:13.500487 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:54:13.500515 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:54:13.500538 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:54:13.500562 kernel: fuse: init (API version 7.39) Jan 30 13:54:13.500587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:54:13.500611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:54:13.500636 kernel: loop: module loaded Jan 30 13:54:13.500659 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:54:13.500688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:13.500715 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:13.500740 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:54:13.500761 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:54:13.500783 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:54:13.500807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:54:13.500831 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:54:13.500854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:54:13.500879 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:54:13.500908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:13.500935 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:54:13.500958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:54:13.500984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:13.501008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:13.501033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:54:13.501082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:13.501105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:54:13.501210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:54:13.501241 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:13.501263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:54:13.501284 kernel: ACPI: bus type drm_connector registered Jan 30 13:54:13.501305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:13.501327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:13.501352 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:13.501375 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:54:13.501397 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:54:13.501420 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:54:13.501477 systemd-journald[1566]: Collecting audit messages is disabled. 
Jan 30 13:54:13.501525 systemd-journald[1566]: Journal started Jan 30 13:54:13.501567 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec28d207c21420a8f8add82a24ad1135) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:54:13.505341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:54:13.513084 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:54:13.523179 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:54:13.528110 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:54:13.546097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:13.580253 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:54:13.580358 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:54:13.588081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:54:13.603720 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:54:13.609027 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:54:13.614976 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:54:13.616974 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:54:13.626022 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:54:13.664299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:54:13.678314 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:54:13.699695 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:13.709891 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jan 30 13:54:13.709924 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Jan 30 13:54:13.720653 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:13.738746 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec28d207c21420a8f8add82a24ad1135 is 54.288ms for 938 entries. Jan 30 13:54:13.738746 systemd-journald[1566]: System Journal (/var/log/journal/ec28d207c21420a8f8add82a24ad1135) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:54:13.814909 systemd-journald[1566]: Received client request to flush runtime journal. Jan 30 13:54:13.739370 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:54:13.742711 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:13.752594 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:54:13.788237 udevadm[1627]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:54:13.818346 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:54:13.821812 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:54:13.845692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 30 13:54:13.873878 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jan 30 13:54:13.874394 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Jan 30 13:54:13.882079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:14.568713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:54:14.582964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:14.646471 systemd-udevd[1642]: Using default interface naming scheme 'v255'. Jan 30 13:54:14.698843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:14.715443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:14.769386 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:54:14.846329 (udev-worker)[1654]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:14.850671 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:54:14.930610 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:54:14.974107 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:54:14.991112 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 30 13:54:15.002621 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:54:15.002653 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 30 13:54:15.017708 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:54:15.056158 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 30 13:54:15.073509 systemd-networkd[1646]: lo: Link UP Jan 30 13:54:15.073523 systemd-networkd[1646]: lo: Gained carrier Jan 30 13:54:15.076546 systemd-networkd[1646]: Enumeration completed Jan 30 13:54:15.076733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:15.077212 systemd-networkd[1646]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:54:15.077218 systemd-networkd[1646]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:54:15.083343 systemd-networkd[1646]: eth0: Link UP Jan 30 13:54:15.083558 systemd-networkd[1646]: eth0: Gained carrier Jan 30 13:54:15.083588 systemd-networkd[1646]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:54:15.085916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:54:15.094634 systemd-networkd[1646]: eth0: DHCPv4 address 172.31.31.232/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:54:15.099974 systemd-networkd[1646]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:54:15.109186 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1652) Jan 30 13:54:15.218385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:15.230089 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:54:15.374892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 30 13:54:15.404650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:54:15.416896 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:54:15.601338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:15.627033 lvm[1764]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:54:15.668298 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:54:15.670890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:15.679429 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:54:15.700476 lvm[1769]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:54:15.738096 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:54:15.742412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:54:15.744628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:54:15.744669 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:15.746823 systemd[1]: Reached target machines.target - Containers. Jan 30 13:54:15.750010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:54:15.759405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:54:15.773017 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:54:15.774978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:15.790161 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:54:15.802444 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:54:15.814497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:54:15.818707 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:54:15.868884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:54:15.885186 kernel: loop0: detected capacity change from 0 to 61336 Jan 30 13:54:15.920500 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:54:15.921999 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 30 13:54:15.947094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:54:15.972087 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:54:16.042092 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:54:16.154553 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 13:54:16.245096 kernel: loop4: detected capacity change from 0 to 61336 Jan 30 13:54:16.283098 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:54:16.321097 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 13:54:16.365089 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 13:54:16.404649 (sd-merge)[1791]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:54:16.405429 (sd-merge)[1791]: Merged extensions into '/usr'. Jan 30 13:54:16.419108 systemd[1]: Reloading requested from client PID 1777 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:54:16.419129 systemd[1]: Reloading... Jan 30 13:54:16.527260 zram_generator::config[1819]: No configuration found. Jan 30 13:54:16.584886 systemd-networkd[1646]: eth0: Gained IPv6LL Jan 30 13:54:16.768503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:54:16.791089 ldconfig[1773]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:54:16.871357 systemd[1]: Reloading finished in 450 ms. Jan 30 13:54:16.891873 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:54:16.894405 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:54:16.897682 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:54:16.919961 systemd[1]: Starting ensure-sysext.service... Jan 30 13:54:16.933650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:54:16.953253 systemd[1]: Reloading requested from client PID 1877 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:54:16.953438 systemd[1]: Reloading... Jan 30 13:54:17.018830 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:54:17.019663 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:54:17.026969 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:54:17.027433 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 30 13:54:17.027533 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 30 13:54:17.032822 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:17.032839 systemd-tmpfiles[1878]: Skipping /boot Jan 30 13:54:17.055442 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:17.055458 systemd-tmpfiles[1878]: Skipping /boot Jan 30 13:54:17.125112 zram_generator::config[1906]: No configuration found. Jan 30 13:54:17.352525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:54:17.540275 systemd[1]: Reloading finished in 585 ms. Jan 30 13:54:17.564905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:17.573041 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:54:17.580988 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:54:17.585343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:54:17.598678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:54:17.608403 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:54:17.628430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.628744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:17.639867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:17.652896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:17.664987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:17.667711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:17.669254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.684681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.690048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:17.691227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:17.691462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.695967 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:54:17.709537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.709917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:54:17.716612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:17.718332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:17.718645 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:54:17.720219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:17.721651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:17.721917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:17.725940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 13:54:17.727807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:17.732303 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:17.737375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:54:17.749920 systemd[1]: Finished ensure-sysext.service. Jan 30 13:54:17.753005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:17.753323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:54:17.760353 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:17.762752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:17.772786 augenrules[1997]: No rules Jan 30 13:54:17.772507 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:54:17.780750 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:54:17.796406 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:54:17.830612 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:54:17.848648 systemd-resolved[1966]: Positive Trust Anchors: Jan 30 13:54:17.848674 systemd-resolved[1966]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:54:17.848722 systemd-resolved[1966]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:54:17.855483 systemd-resolved[1966]: Defaulting to hostname 'linux'. Jan 30 13:54:17.858112 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:54:17.859837 systemd[1]: Reached target network.target - Network. Jan 30 13:54:17.861390 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:54:17.863216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:17.877695 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:54:17.880671 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:54:17.880741 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:54:17.883174 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:54:17.885332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:54:17.887911 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:54:17.890035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 30 13:54:17.892049 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:54:17.893990 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:54:17.894042 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:54:17.895752 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:54:17.899106 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:54:17.902979 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:54:17.910241 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:54:17.918989 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:54:17.928240 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:54:17.935977 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:54:17.937763 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:54:17.937822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:17.937851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:17.951219 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:54:17.958481 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:54:17.963431 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:54:17.981731 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:54:17.987031 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:54:17.990245 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:54:18.011364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:18.037429 jq[2020]: false Jan 30 13:54:18.043905 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:54:18.059397 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:54:18.069316 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:54:18.081990 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:54:18.097274 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:54:18.124236 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:54:18.142296 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:54:18.145657 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:54:18.158475 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:54:18.173252 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:54:18.200744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:54:18.201296 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:54:18.225712 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 30 13:54:18.245653 jq[2041]: true Jan 30 13:54:18.229235 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:54:18.266673 extend-filesystems[2021]: Found loop4 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found loop5 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found loop6 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found loop7 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p1 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p2 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p3 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found usr Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p4 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p6 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p7 Jan 30 13:54:18.266673 extend-filesystems[2021]: Found nvme0n1p9 Jan 30 13:54:18.266673 extend-filesystems[2021]: Checking size of /dev/nvme0n1p9 Jan 30 13:54:18.287291 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:54:18.274487 dbus-daemon[2019]: [system] SELinux support is enabled Jan 30 13:54:18.336899 dbus-daemon[2019]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1646 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:54:18.353940 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:54:18.363289 update_engine[2038]: I20250130 13:54:18.355671 2038 main.cc:92] Flatcar Update Engine starting Jan 30 13:54:18.363289 update_engine[2038]: I20250130 13:54:18.359901 2038 update_check_scheduler.cc:74] Next update check in 11m55s Jan 30 13:54:18.399110 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:54:18.403792 extend-filesystems[2021]: Resized partition /dev/nvme0n1p9 Jan 30 13:54:18.409281 ntpd[2027]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:54:18.409314 ntpd[2027]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: ---------------------------------------------------- Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: corporation. Support and training for ntp-4 are Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: available at https://www.nwtime.org/support Jan 30 13:54:18.409941 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: ---------------------------------------------------- Jan 30 13:54:18.409324 ntpd[2027]: ---------------------------------------------------- Jan 30 13:54:18.409332 ntpd[2027]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:54:18.409341 ntpd[2027]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:54:18.409349 ntpd[2027]: corporation. Support and training for ntp-4 are
Jan 30 13:54:18.409358 ntpd[2027]: available at https://www.nwtime.org/support Jan 30 13:54:18.409367 ntpd[2027]: ---------------------------------------------------- Jan 30 13:54:18.414755 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:54:18.415438 (ntainerd)[2060]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:54:18.418340 jq[2058]: true Jan 30 13:54:18.428888 ntpd[2027]: proto: precision = 0.064 usec (-24) Jan 30 13:54:18.431156 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: proto: precision = 0.064 usec (-24) Jan 30 13:54:18.431156 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: basedate set to 2025-01-17 Jan 30 13:54:18.431156 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: gps base set to 2025-01-19 (week 2350) Jan 30 13:54:18.429340 ntpd[2027]: basedate set to 2025-01-17 Jan 30 13:54:18.429360 ntpd[2027]: gps base set to 2025-01-19 (week 2350) Jan 30 13:54:18.432581 extend-filesystems[2072]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:54:18.461043 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:54:18.461141 coreos-metadata[2017]: Jan 30 13:54:18.461 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:54:18.469013 ntpd[2027]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen normally on 3 eth0 172.31.31.232:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen normally on 4 lo [::1]:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listen normally on 5 eth0 [fe80::431:d9ff:fed8:4e9%2]:123 Jan 30 13:54:18.487230 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: Listening on routing socket on fd #22 for interface updates Jan 30 13:54:18.487572 coreos-metadata[2017]: Jan 30 13:54:18.486 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:54:18.480386 ntpd[2027]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:54:18.480597 ntpd[2027]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:54:18.480635 ntpd[2027]: Listen normally on 3 eth0 172.31.31.232:123 Jan 30 13:54:18.480694 ntpd[2027]: Listen normally on 4 lo [::1]:123 Jan 30 13:54:18.480743 ntpd[2027]: Listen normally on 5 eth0 [fe80::431:d9ff:fed8:4e9%2]:123 Jan 30 13:54:18.480784 ntpd[2027]: Listening on routing socket on fd #22 for interface updates Jan 30 13:54:18.492619 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:54:18.492693 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:54:18.495410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:54:18.495439 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:54:18.499028 coreos-metadata[2017]: Jan 30 13:54:18.498 INFO Fetch successful Jan 30 13:54:18.499028 coreos-metadata[2017]: Jan 30 13:54:18.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:54:18.500318 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:54:18.503989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:54:18.508511 coreos-metadata[2017]: Jan 30 13:54:18.506 INFO Fetch successful Jan 30 13:54:18.508511 coreos-metadata[2017]: Jan 30 13:54:18.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:54:18.514303 systemd-logind[2036]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:54:18.516227 ntpd[2027]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:54:18.541394 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:54:18.541394 ntpd[2027]: 30 Jan 13:54:18 ntpd[2027]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:54:18.514331 systemd-logind[2036]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:54:18.516265 ntpd[2027]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.535 INFO Fetch successful Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.535 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.536 INFO Fetch successful Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.538 INFO Fetch failed with 404: resource not found Jan 30 13:54:18.541739 coreos-metadata[2017]: Jan 30 13:54:18.538 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:54:18.514355 systemd-logind[2036]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:54:18.516726 dbus-daemon[2019]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:54:18.514586 systemd-logind[2036]: New seat seat0. Jan 30 13:54:18.550896 coreos-metadata[2017]: Jan 30 13:54:18.548 INFO Fetch successful Jan 30 13:54:18.550896 coreos-metadata[2017]: Jan 30 13:54:18.548 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:54:18.514659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:54:18.557340 coreos-metadata[2017]: Jan 30 13:54:18.551 INFO Fetch successful Jan 30 13:54:18.557340 coreos-metadata[2017]: Jan 30 13:54:18.551 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 13:54:18.557340 coreos-metadata[2017]: Jan 30 13:54:18.555 INFO Fetch successful Jan 30 13:54:18.557340 coreos-metadata[2017]: Jan 30 13:54:18.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 13:54:18.521023 systemd[1]: Started systemd-logind.service - User Login Management. 
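The coreos-metadata fetches above follow the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token for a session token, then token-authenticated GETs against the versioned paths (here /2021-01-03/); the 404 on the ipv6 lookup simply means this instance has no IPv6 address. A sketch of the same flow with curl, where the TTL value is an arbitrary choice:

    # IMDSv2: obtain a short-lived session token first ...
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    # ... then present it on every metadata read. Missing resources
    # return 404, logged above as "Fetch failed with 404".
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/2021-01-03/meta-data/instance-id"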
Jan 30 13:54:18.562856 coreos-metadata[2017]: Jan 30 13:54:18.560 INFO Fetch successful Jan 30 13:54:18.562856 coreos-metadata[2017]: Jan 30 13:54:18.560 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 13:54:18.539480 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:54:18.577991 coreos-metadata[2017]: Jan 30 13:54:18.568 INFO Fetch successful Jan 30 13:54:18.565271 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 30 13:54:18.774178 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 13:54:18.687328 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:54:18.788108 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:54:18.801343 extend-filesystems[2072]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 13:54:18.801343 extend-filesystems[2072]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:54:18.801343 extend-filesystems[2072]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 13:54:18.793479 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:54:18.822909 extend-filesystems[2021]: Resized filesystem in /dev/nvme0n1p9 Jan 30 13:54:18.793861 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:54:18.838389 bash[2104]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:54:18.804361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:54:18.859152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2117) Jan 30 13:54:18.840164 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:54:18.864565 systemd[1]: Starting sshkeys.service... Jan 30 13:54:18.878945 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:54:18.888610 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:54:18.989835 amazon-ssm-agent[2105]: Initializing new seelog logger Jan 30 13:54:18.989835 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete Jan 30 13:54:18.989835 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:18.989835 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:18.989835 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 processing appconfig overrides Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 processing appconfig overrides Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: 2025/01/30 13:54:18 processing appconfig overrides Jan 30 13:54:18.997492 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO Proxy environment variables: Jan 30 13:54:19.015706 amazon-ssm-agent[2105]: 2025/01/30 13:54:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:19.015706 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:54:19.015706 amazon-ssm-agent[2105]: 2025/01/30 13:54:19 processing appconfig overrides Jan 30 13:54:19.048383 locksmithd[2089]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:54:19.069525 coreos-metadata[2170]: Jan 30 13:54:19.069 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:54:19.069525 coreos-metadata[2170]: Jan 30 13:54:19.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 13:54:19.069525 coreos-metadata[2170]: Jan 30 13:54:19.069 INFO Fetch successful Jan 30 13:54:19.069525 coreos-metadata[2170]: Jan 30 13:54:19.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 13:54:19.070177 coreos-metadata[2170]: Jan 30 13:54:19.070 INFO Fetch successful Jan 30 13:54:19.078976 unknown[2170]: wrote ssh authorized keys file for user: core Jan 30 13:54:19.100417 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO http_proxy: Jan 30 13:54:19.164867 update-ssh-keys[2221]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:54:19.168752 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:54:19.190880 systemd[1]: Finished sshkeys.service. Jan 30 13:54:19.212170 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO no_proxy: Jan 30 13:54:19.304285 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO https_proxy: Jan 30 13:54:19.406411 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO Checking if agent identity type OnPrem can be assumed Jan 30 13:54:19.467386 dbus-daemon[2019]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:54:19.467955 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:54:19.479221 dbus-daemon[2019]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2125 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:54:19.495584 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:54:19.509762 amazon-ssm-agent[2105]: 2025-01-30 13:54:18 INFO Checking if agent identity type EC2 can be assumed Jan 30 13:54:19.539483 polkitd[2258]: Started polkitd version 121 Jan 30 13:54:19.560967 polkitd[2258]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:54:19.561135 polkitd[2258]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:54:19.564101 polkitd[2258]: Finished loading, compiling and executing 2 rules Jan 30 13:54:19.565550 dbus-daemon[2019]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:54:19.565996 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:54:19.570582 polkitd[2258]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:54:19.610470 systemd-resolved[1966]: System hostname changed to 'ip-172-31-31-232'. 
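The sequence above is plain D-Bus activation: systemd-hostnamed (requested earlier by systemd-networkd) comes up, asks polkit to authorize the change, and sets the transient hostname ip-172-31-31-232 learned from instance metadata/DHCP. Two read-only queries to observe the same state on any systemd host:

    # Static vs. transient hostname, as managed by systemd-hostnamed.
    hostnamectl status
    # Credentials of the current owner of the bus name (activating it
    # on demand, just as dbus-daemon did above).
    busctl status org.freedesktop.hostname1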
Jan 30 13:54:19.610470 systemd-hostnamed[2125]: Hostname set to (transient) Jan 30 13:54:19.611349 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO Agent will take identity from EC2 Jan 30 13:54:19.625674 sshd_keygen[2059]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:54:19.671140 containerd[2060]: time="2025-01-30T13:54:19.670286662Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:54:19.710926 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:54:19.737245 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:54:19.755832 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:54:19.788474 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:54:19.788938 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:54:19.811493 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:54:19.825075 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:54:19.830718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:54:19.831452 containerd[2060]: time="2025-01-30T13:54:19.831356030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.836583 containerd[2060]: time="2025-01-30T13:54:19.836527223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:19.836723 containerd[2060]: time="2025-01-30T13:54:19.836704360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:54:19.836801 containerd[2060]: time="2025-01-30T13:54:19.836786288Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:54:19.837099 containerd[2060]: time="2025-01-30T13:54:19.837049115Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:54:19.837199 containerd[2060]: time="2025-01-30T13:54:19.837183193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.837366 containerd[2060]: time="2025-01-30T13:54:19.837333235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:19.837444 containerd[2060]: time="2025-01-30T13:54:19.837428615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.837889 containerd[2060]: time="2025-01-30T13:54:19.837794310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:19.838005 containerd[2060]: time="2025-01-30T13:54:19.837987603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:54:19.838107 containerd[2060]: time="2025-01-30T13:54:19.838089151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:19.838197 containerd[2060]: time="2025-01-30T13:54:19.838184049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.838435 containerd[2060]: time="2025-01-30T13:54:19.838399814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.838924 containerd[2060]: time="2025-01-30T13:54:19.838894586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:19.839306 containerd[2060]: time="2025-01-30T13:54:19.839282372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:19.839410 containerd[2060]: time="2025-01-30T13:54:19.839394490Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:54:19.839603 containerd[2060]: time="2025-01-30T13:54:19.839585156Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:54:19.839765 containerd[2060]: time="2025-01-30T13:54:19.839731711Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:54:19.840456 systemd[1]: Started sshd@0-172.31.31.232:22-139.178.68.195:53218.service - OpenSSH per-connection server daemon (139.178.68.195:53218). Jan 30 13:54:19.852706 containerd[2060]: time="2025-01-30T13:54:19.852166197Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:54:19.852706 containerd[2060]: time="2025-01-30T13:54:19.852324372Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:54:19.852706 containerd[2060]: time="2025-01-30T13:54:19.852355593Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:54:19.852706 containerd[2060]: time="2025-01-30T13:54:19.852388608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:54:19.852706 containerd[2060]: time="2025-01-30T13:54:19.852408531Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:54:19.854567 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:54:19.858809 containerd[2060]: time="2025-01-30T13:54:19.858138044Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:54:19.869288 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.871729757Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872052462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872095324Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872114866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872134811Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872156008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872174384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872194361Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872215503Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872235324Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872252385Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.872898 containerd[2060]: time="2025-01-30T13:54:19.872268765Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:54:19.882268 containerd[2060]: time="2025-01-30T13:54:19.882220506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882380 containerd[2060]: time="2025-01-30T13:54:19.882284257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882380 containerd[2060]: time="2025-01-30T13:54:19.882305308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882380 containerd[2060]: time="2025-01-30T13:54:19.882325249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882380 containerd[2060]: time="2025-01-30T13:54:19.882343119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882380 containerd[2060]: time="2025-01-30T13:54:19.882373109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882393290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882415785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882435722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882468634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882629875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882659380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882678655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882705628Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:54:19.882749 containerd[2060]: time="2025-01-30T13:54:19.882744542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882763280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882781287Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882854544Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882880485Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882898097Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882919073Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882934664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882952941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882968551Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:54:19.886176 containerd[2060]: time="2025-01-30T13:54:19.882985171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:54:19.883677 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:54:19.886925 systemd[1]: Reached target getty.target - Login Prompts. 
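By this point containerd has probed every built-in snapshotter and skipped the unusable ones with the reasons logged above: aufs (kernel module missing), blockfile (no scratch file generator), btrfs (its directory sits on ext4), devmapper (not configured), and zfs, leaving overlayfs as the effective default. The resulting plugin table can be inspected after boot; a sketch, assuming the default socket that appears later in the log:

    # List containerd plugins; snapshotters rejected during init show a
    # skip status with the same errors seen in the startup messages.
    ctr --address /run/containerd/containerd.sock plugins ls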
Jan 30 13:54:19.890319 containerd[2060]: time="2025-01-30T13:54:19.888449240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:54:19.890319 containerd[2060]: time="2025-01-30T13:54:19.888596569Z" level=info msg="Connect containerd service" Jan 30 13:54:19.890319 containerd[2060]: time="2025-01-30T13:54:19.888665302Z" level=info msg="using legacy CRI server" Jan 30 13:54:19.890319 containerd[2060]: time="2025-01-30T13:54:19.888678030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:54:19.890319 containerd[2060]: time="2025-01-30T13:54:19.888924664Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:54:19.899441 containerd[2060]: time="2025-01-30T13:54:19.898320936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:19.902518 containerd[2060]: 
time="2025-01-30T13:54:19.900856746Z" level=info msg="Start subscribing containerd event" Jan 30 13:54:19.902833 containerd[2060]: time="2025-01-30T13:54:19.902783391Z" level=info msg="Start recovering state" Jan 30 13:54:19.902967 containerd[2060]: time="2025-01-30T13:54:19.901263378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.903138603Z" level=info msg="Start event monitor" Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.904200900Z" level=info msg="Start snapshots syncer" Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.904216429Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.904227222Z" level=info msg="Start streaming server" Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.904523176Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:54:19.905083 containerd[2060]: time="2025-01-30T13:54:19.904873646Z" level=info msg="containerd successfully booted in 0.241442s" Jan 30 13:54:19.904745 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:54:19.927080 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:54:20.024452 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 13:54:20.126707 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 30 13:54:20.164165 sshd[2287]: Accepted publickey for core from 139.178.68.195 port 53218 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:20.170170 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:20.190999 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:54:20.198530 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [Registrar] Starting registrar module Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:20 INFO [EC2Identity] EC2 registration was successful. Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:20 INFO [CredentialRefresher] credentialRefresher has started Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:20 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 13:54:20.200256 amazon-ssm-agent[2105]: 2025-01-30 13:54:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 13:54:20.207347 systemd-logind[2036]: New session 1 of user core. Jan 30 13:54:20.230393 amazon-ssm-agent[2105]: 2025-01-30 13:54:20 INFO [CredentialRefresher] Next credential rotation will be in 32.43332601845 minutes Jan 30 13:54:20.236939 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:54:20.251496 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:54:20.275784 (systemd)[2299]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:54:20.483266 systemd[2299]: Queued start job for default target default.target. Jan 30 13:54:20.483890 systemd[2299]: Created slice app.slice - User Application Slice. Jan 30 13:54:20.483931 systemd[2299]: Reached target paths.target - Paths. Jan 30 13:54:20.483952 systemd[2299]: Reached target timers.target - Timers. Jan 30 13:54:20.491342 systemd[2299]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:54:20.512190 systemd[2299]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:54:20.512288 systemd[2299]: Reached target sockets.target - Sockets. Jan 30 13:54:20.512309 systemd[2299]: Reached target basic.target - Basic System. Jan 30 13:54:20.512373 systemd[2299]: Reached target default.target - Main User Target. Jan 30 13:54:20.512414 systemd[2299]: Startup finished in 226ms. Jan 30 13:54:20.514270 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:54:20.522250 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:54:20.689593 systemd[1]: Started sshd@1-172.31.31.232:22-139.178.68.195:53220.service - OpenSSH per-connection server daemon (139.178.68.195:53220). Jan 30 13:54:20.872556 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 53220 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:20.877378 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:20.885045 systemd-logind[2036]: New session 2 of user core. Jan 30 13:54:20.889509 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:54:20.996408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:54:20.999685 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:54:21.006966 systemd[1]: Startup finished in 9.480s (kernel) + 8.970s (userspace) = 18.450s. Jan 30 13:54:21.031326 sshd[2311]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:21.036681 systemd-logind[2036]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:54:21.038508 systemd[1]: sshd@1-172.31.31.232:22-139.178.68.195:53220.service: Deactivated successfully. Jan 30 13:54:21.045087 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:54:21.061756 systemd-logind[2036]: Removed session 2. Jan 30 13:54:21.071007 systemd[1]: Started sshd@2-172.31.31.232:22-139.178.68.195:53230.service - OpenSSH per-connection server daemon (139.178.68.195:53230). Jan 30 13:54:21.148741 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:54:21.229944 amazon-ssm-agent[2105]: 2025-01-30 13:54:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 13:54:21.320042 sshd[2329]: Accepted publickey for core from 139.178.68.195 port 53230 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:21.323891 sshd[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:21.331875 amazon-ssm-agent[2105]: 2025-01-30 13:54:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2335) started Jan 30 13:54:21.342176 systemd-logind[2036]: New session 3 of user core. 
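The first accepted SSH key triggers the standard logind plumbing: user-500.slice, the /run/user/500 runtime directory, and a dedicated per-user manager, user@500.service, whose internal units (app.slice, dbus.socket, default.target) are what the systemd[2299] lines report before session-1.scope opens. A couple of read-only commands to see the same hierarchy:

    # The per-user service manager for UID 500 ("core").
    systemctl status user@500.service
    # Active sessions and the scopes (session-1.scope, ...) they map to.
    loginctl list-sessions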
Jan 30 13:54:21.344865 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:54:21.432433 amazon-ssm-agent[2105]: 2025-01-30 13:54:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 13:54:21.484390 sshd[2329]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:21.492051 systemd[1]: sshd@2-172.31.31.232:22-139.178.68.195:53230.service: Deactivated successfully. Jan 30 13:54:21.499120 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:54:21.501426 systemd-logind[2036]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:54:21.506546 systemd-logind[2036]: Removed session 3. Jan 30 13:54:22.262161 kubelet[2324]: E0130 13:54:22.262047 2324 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:54:22.264823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:54:22.265425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:54:25.722247 systemd-resolved[1966]: Clock change detected. Flushing caches. Jan 30 13:54:31.822598 systemd[1]: Started sshd@3-172.31.31.232:22-139.178.68.195:37966.service - OpenSSH per-connection server daemon (139.178.68.195:37966). Jan 30 13:54:31.980815 sshd[2360]: Accepted publickey for core from 139.178.68.195 port 37966 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:31.982547 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:31.987128 systemd-logind[2036]: New session 4 of user core. Jan 30 13:54:31.994579 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:54:32.117169 sshd[2360]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:32.121712 systemd[1]: sshd@3-172.31.31.232:22-139.178.68.195:37966.service: Deactivated successfully. Jan 30 13:54:32.127191 systemd-logind[2036]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:54:32.128080 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:54:32.129641 systemd-logind[2036]: Removed session 4. Jan 30 13:54:32.149954 systemd[1]: Started sshd@4-172.31.31.232:22-139.178.68.195:37972.service - OpenSSH per-connection server daemon (139.178.68.195:37972). Jan 30 13:54:32.323828 sshd[2368]: Accepted publickey for core from 139.178.68.195 port 37972 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:32.325477 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:32.345034 systemd-logind[2036]: New session 5 of user core. Jan 30 13:54:32.361112 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:54:32.496675 sshd[2368]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:32.503026 systemd[1]: sshd@4-172.31.31.232:22-139.178.68.195:37972.service: Deactivated successfully. Jan 30 13:54:32.507998 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:54:32.509143 systemd-logind[2036]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:54:32.510438 systemd-logind[2036]: Removed session 5. 
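The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1. That file is normally written later by kubeadm or a provisioning step (plausibly the install.sh run that follows, though the log itself only shows the missing file). A quick check for the same condition:

    # The kubelet exits immediately while its config file is absent.
    test -f /var/lib/kubelet/config.yaml || echo "config.yaml not written yet"
    systemctl is-failed kubelet.service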
Jan 30 13:54:32.534635 systemd[1]: Started sshd@5-172.31.31.232:22-139.178.68.195:37982.service - OpenSSH per-connection server daemon (139.178.68.195:37982). Jan 30 13:54:32.694863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:54:32.701647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:32.737315 sshd[2376]: Accepted publickey for core from 139.178.68.195 port 37982 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:32.740505 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:32.756975 systemd-logind[2036]: New session 6 of user core. Jan 30 13:54:32.767082 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:54:32.893806 sshd[2376]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:32.904124 systemd[1]: sshd@5-172.31.31.232:22-139.178.68.195:37982.service: Deactivated successfully. Jan 30 13:54:32.908701 systemd-logind[2036]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:54:32.909906 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:54:32.911717 systemd-logind[2036]: Removed session 6. Jan 30 13:54:32.923760 systemd[1]: Started sshd@6-172.31.31.232:22-139.178.68.195:37994.service - OpenSSH per-connection server daemon (139.178.68.195:37994). Jan 30 13:54:33.004513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:54:33.007977 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:54:33.074370 kubelet[2397]: E0130 13:54:33.074288 2397 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:54:33.080635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:54:33.081008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:54:33.111805 sshd[2388]: Accepted publickey for core from 139.178.68.195 port 37994 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:33.113503 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:33.118262 systemd-logind[2036]: New session 7 of user core. Jan 30 13:54:33.128577 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:54:33.259261 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:54:33.260112 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:33.279352 sudo[2409]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:33.305028 sshd[2388]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:33.309976 systemd[1]: sshd@6-172.31.31.232:22-139.178.68.195:37994.service: Deactivated successfully. Jan 30 13:54:33.316653 systemd-logind[2036]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:54:33.317559 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:54:33.320351 systemd-logind[2036]: Removed session 7. Jan 30 13:54:33.335120 systemd[1]: Started sshd@7-172.31.31.232:22-139.178.68.195:38010.service - OpenSSH per-connection server daemon (139.178.68.195:38010). 
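systemd treats that exit as a failure and schedules a retry ("restart counter is at 1"), and the second attempt at 13:54:33 dies on the same missing config file. The restart policy and counter are ordinary unit properties; a sketch of how to read them:

    # Restart policy, delay between attempts, and restarts so far.
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts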
Jan 30 13:54:33.507952 sshd[2414]: Accepted publickey for core from 139.178.68.195 port 38010 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:33.509747 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:33.517924 systemd-logind[2036]: New session 8 of user core. Jan 30 13:54:33.528834 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:54:33.631500 sudo[2419]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:54:33.632706 sudo[2419]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:33.638874 sudo[2419]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:33.645115 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:54:33.645850 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:33.666726 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:54:33.670965 auditctl[2422]: No rules Jan 30 13:54:33.672023 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:54:33.672436 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:54:33.686183 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:54:33.716917 augenrules[2441]: No rules Jan 30 13:54:33.719689 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:54:33.723799 sudo[2418]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:33.746957 sshd[2414]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:33.751215 systemd[1]: sshd@7-172.31.31.232:22-139.178.68.195:38010.service: Deactivated successfully. Jan 30 13:54:33.757043 systemd-logind[2036]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:54:33.757923 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:54:33.760808 systemd-logind[2036]: Removed session 8. Jan 30 13:54:33.777865 systemd[1]: Started sshd@8-172.31.31.232:22-139.178.68.195:38024.service - OpenSSH per-connection server daemon (139.178.68.195:38024). Jan 30 13:54:33.947185 sshd[2450]: Accepted publickey for core from 139.178.68.195 port 38024 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:33.948718 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:33.954836 systemd-logind[2036]: New session 9 of user core. Jan 30 13:54:33.964646 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:54:34.068057 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:54:34.068564 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:54:35.235699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:54:35.247942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:35.305534 systemd[1]: Reloading requested from client PID 2493 ('systemctl') (unit session-9.scope)... Jan 30 13:54:35.305556 systemd[1]: Reloading... Jan 30 13:54:35.504229 zram_generator::config[2536]: No configuration found. 
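The audit-rules bounce in session 8 above is the standard flush-and-reload sequence: the rule drop-ins are deleted, auditctl reports "No rules" as the service stops, and augenrules loads the now-empty ruleset on start. The loaded ruleset can be confirmed directly:

    # Print the audit rules currently loaded in the kernel; after the
    # restart above this prints "No rules".
    auditctl -l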
Jan 30 13:54:35.747620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:54:35.870481 systemd[1]: Reloading finished in 564 ms. Jan 30 13:54:35.958878 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:54:35.959156 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:54:35.959569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:54:35.969463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:54:36.383574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:54:36.389033 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:54:36.463967 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:54:36.463967 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:54:36.463967 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:54:36.465047 kubelet[2605]: I0130 13:54:36.464266 2605 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:54:37.303722 kubelet[2605]: I0130 13:54:37.303682 2605 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:54:37.303722 kubelet[2605]: I0130 13:54:37.303712 2605 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:54:37.304081 kubelet[2605]: I0130 13:54:37.304053 2605 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:54:37.327162 kubelet[2605]: I0130 13:54:37.326932 2605 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:54:37.355628 kubelet[2605]: I0130 13:54:37.355591 2605 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:54:37.359752 kubelet[2605]: I0130 13:54:37.359684 2605 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:54:37.360064 kubelet[2605]: I0130 13:54:37.359750 2605 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.31.232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:54:37.360806 kubelet[2605]: I0130 13:54:37.360776 2605 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:54:37.360806 kubelet[2605]: I0130 13:54:37.360809 2605 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:54:37.360982 kubelet[2605]: I0130 13:54:37.360961 2605 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:54:37.362023 kubelet[2605]: I0130 13:54:37.361998 2605 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:54:37.362102 kubelet[2605]: I0130 13:54:37.362029 2605 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:54:37.362102 kubelet[2605]: I0130 13:54:37.362070 2605 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:54:37.362102 kubelet[2605]: I0130 13:54:37.362096 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:54:37.362952 kubelet[2605]: E0130 13:54:37.362907 2605 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:37.364242 kubelet[2605]: E0130 13:54:37.363307 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:37.367487 kubelet[2605]: I0130 13:54:37.367462 2605 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:54:37.369095 kubelet[2605]: I0130 13:54:37.369073 2605 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:54:37.369179 kubelet[2605]: W0130 13:54:37.369147 2605 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:54:37.370105 kubelet[2605]: I0130 13:54:37.370026 2605 server.go:1264] "Started kubelet" Jan 30 13:54:37.373244 kubelet[2605]: I0130 13:54:37.373192 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:54:37.381248 kubelet[2605]: I0130 13:54:37.380519 2605 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:54:37.382301 kubelet[2605]: I0130 13:54:37.382272 2605 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:54:37.384178 kubelet[2605]: I0130 13:54:37.384083 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:54:37.384688 kubelet[2605]: I0130 13:54:37.384659 2605 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:54:37.393744 kubelet[2605]: I0130 13:54:37.389488 2605 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:54:37.393744 kubelet[2605]: I0130 13:54:37.390246 2605 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:54:37.393744 kubelet[2605]: I0130 13:54:37.390348 2605 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:54:37.393744 kubelet[2605]: W0130 13:54:37.393449 2605 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.31.232" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:54:37.393744 kubelet[2605]: E0130 13:54:37.393660 2605 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.31.232" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:54:37.393744 kubelet[2605]: W0130 13:54:37.393504 2605 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:54:37.394173 kubelet[2605]: E0130 13:54:37.393785 2605 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:54:37.394369 kubelet[2605]: I0130 13:54:37.394348 2605 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:54:37.394618 kubelet[2605]: I0130 13:54:37.394593 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:54:37.395356 kubelet[2605]: E0130 13:54:37.395236 2605 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.232.181f7cdec222043a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.232,UID:172.31.31.232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.31.232,},FirstTimestamp:2025-01-30 13:54:37.369893946 +0000 UTC m=+0.972530004,LastTimestamp:2025-01-30 
13:54:37.369893946 +0000 UTC m=+0.972530004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.232,}" Jan 30 13:54:37.395676 kubelet[2605]: E0130 13:54:37.395646 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.31.232\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 13:54:37.395803 kubelet[2605]: W0130 13:54:37.395786 2605 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:54:37.395890 kubelet[2605]: E0130 13:54:37.395819 2605 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:54:37.396621 kubelet[2605]: E0130 13:54:37.396602 2605 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:54:37.396831 kubelet[2605]: I0130 13:54:37.396812 2605 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:54:37.427617 kubelet[2605]: E0130 13:54:37.427289 2605 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.232.181f7cdec3b95a0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.232,UID:172.31.31.232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.31.232,},FirstTimestamp:2025-01-30 13:54:37.39658907 +0000 UTC m=+0.999225127,LastTimestamp:2025-01-30 13:54:37.39658907 +0000 UTC m=+0.999225127,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.232,}" Jan 30 13:54:37.472331 kubelet[2605]: I0130 13:54:37.472143 2605 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:54:37.472331 kubelet[2605]: I0130 13:54:37.472164 2605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:54:37.472331 kubelet[2605]: I0130 13:54:37.472231 2605 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:54:37.480597 kubelet[2605]: I0130 13:54:37.480558 2605 policy_none.go:49] "None policy: Start" Jan 30 13:54:37.482765 kubelet[2605]: I0130 13:54:37.482726 2605 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:54:37.482859 kubelet[2605]: I0130 13:54:37.482757 2605 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:54:37.501756 kubelet[2605]: I0130 13:54:37.501118 2605 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:54:37.501896 kubelet[2605]: I0130 13:54:37.501847 2605 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 
13:54:37.502069 kubelet[2605]: I0130 13:54:37.502048 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:54:37.507794 kubelet[2605]: I0130 13:54:37.504321 2605 kubelet_node_status.go:73] "Attempting to register node" node="172.31.31.232" Jan 30 13:54:37.511265 kubelet[2605]: E0130 13:54:37.509080 2605 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.31.232\" not found" Jan 30 13:54:37.531645 kubelet[2605]: I0130 13:54:37.531518 2605 kubelet_node_status.go:76] "Successfully registered node" node="172.31.31.232" Jan 30 13:54:37.536235 kubelet[2605]: I0130 13:54:37.536178 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:54:37.539456 kubelet[2605]: I0130 13:54:37.539271 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:54:37.539456 kubelet[2605]: I0130 13:54:37.539460 2605 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:54:37.539715 kubelet[2605]: I0130 13:54:37.539482 2605 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:54:37.539715 kubelet[2605]: E0130 13:54:37.539627 2605 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 13:54:37.571437 kubelet[2605]: E0130 13:54:37.571308 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:37.672332 kubelet[2605]: E0130 13:54:37.672278 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:37.773267 kubelet[2605]: E0130 13:54:37.773222 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:37.873670 kubelet[2605]: E0130 13:54:37.873547 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:37.974231 kubelet[2605]: E0130 13:54:37.974172 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:38.074983 kubelet[2605]: E0130 13:54:38.074932 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:38.175861 kubelet[2605]: E0130 13:54:38.175732 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:38.276556 kubelet[2605]: E0130 13:54:38.276506 2605 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.232\" not found" Jan 30 13:54:38.306217 kubelet[2605]: I0130 13:54:38.306169 2605 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:54:38.306409 kubelet[2605]: W0130 13:54:38.306390 2605 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:54:38.363460 kubelet[2605]: I0130 13:54:38.363405 2605 apiserver.go:52] "Watching apiserver" Jan 30 13:54:38.363672 kubelet[2605]: E0130 13:54:38.363402 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:38.375401 kubelet[2605]: I0130 13:54:38.375356 2605 
topology_manager.go:215] "Topology Admit Handler" podUID="13378ebe-1ab0-4d88-9cda-3c266dc75263" podNamespace="calico-system" podName="calico-node-99xwr" Jan 30 13:54:38.375549 kubelet[2605]: I0130 13:54:38.375483 2605 topology_manager.go:215] "Topology Admit Handler" podUID="41331529-9d0a-4578-9d4a-d0617145104a" podNamespace="calico-system" podName="csi-node-driver-6m9m7" Jan 30 13:54:38.375596 kubelet[2605]: I0130 13:54:38.375574 2605 topology_manager.go:215] "Topology Admit Handler" podUID="6b93eabd-fc89-48b4-92bf-623f1324cc25" podNamespace="kube-system" podName="kube-proxy-76mkq" Jan 30 13:54:38.377224 kubelet[2605]: E0130 13:54:38.376013 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:38.383182 kubelet[2605]: I0130 13:54:38.383152 2605 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:54:38.388648 containerd[2060]: time="2025-01-30T13:54:38.388577338Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:54:38.393283 kubelet[2605]: I0130 13:54:38.393260 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:54:38.395780 kubelet[2605]: I0130 13:54:38.393747 2605 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:54:38.415999 kubelet[2605]: I0130 13:54:38.415664 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b93eabd-fc89-48b4-92bf-623f1324cc25-xtables-lock\") pod \"kube-proxy-76mkq\" (UID: \"6b93eabd-fc89-48b4-92bf-623f1324cc25\") " pod="kube-system/kube-proxy-76mkq" Jan 30 13:54:38.415999 kubelet[2605]: I0130 13:54:38.415727 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vc6h\" (UniqueName: \"kubernetes.io/projected/6b93eabd-fc89-48b4-92bf-623f1324cc25-kube-api-access-8vc6h\") pod \"kube-proxy-76mkq\" (UID: \"6b93eabd-fc89-48b4-92bf-623f1324cc25\") " pod="kube-system/kube-proxy-76mkq" Jan 30 13:54:38.415999 kubelet[2605]: I0130 13:54:38.415772 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-cni-bin-dir\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.415999 kubelet[2605]: I0130 13:54:38.415805 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-flexvol-driver-host\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.415999 kubelet[2605]: I0130 13:54:38.415832 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41331529-9d0a-4578-9d4a-d0617145104a-socket-dir\") pod \"csi-node-driver-6m9m7\" (UID: \"41331529-9d0a-4578-9d4a-d0617145104a\") " 
pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:38.416341 kubelet[2605]: I0130 13:54:38.415865 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41331529-9d0a-4578-9d4a-d0617145104a-registration-dir\") pod \"csi-node-driver-6m9m7\" (UID: \"41331529-9d0a-4578-9d4a-d0617145104a\") " pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:38.416341 kubelet[2605]: I0130 13:54:38.415893 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-lib-modules\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.416341 kubelet[2605]: I0130 13:54:38.415926 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-xtables-lock\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.416341 kubelet[2605]: I0130 13:54:38.415956 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13378ebe-1ab0-4d88-9cda-3c266dc75263-tigera-ca-bundle\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.416341 kubelet[2605]: I0130 13:54:38.415982 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41331529-9d0a-4578-9d4a-d0617145104a-kubelet-dir\") pod \"csi-node-driver-6m9m7\" (UID: \"41331529-9d0a-4578-9d4a-d0617145104a\") " pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:38.417583 kubelet[2605]: I0130 13:54:38.416009 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13378ebe-1ab0-4d88-9cda-3c266dc75263-node-certs\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.417583 kubelet[2605]: I0130 13:54:38.416037 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-var-run-calico\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.417583 kubelet[2605]: I0130 13:54:38.416062 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b93eabd-fc89-48b4-92bf-623f1324cc25-lib-modules\") pod \"kube-proxy-76mkq\" (UID: \"6b93eabd-fc89-48b4-92bf-623f1324cc25\") " pod="kube-system/kube-proxy-76mkq" Jan 30 13:54:38.417583 kubelet[2605]: I0130 13:54:38.416088 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwr8\" (UniqueName: \"kubernetes.io/projected/13378ebe-1ab0-4d88-9cda-3c266dc75263-kube-api-access-6bwr8\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.417583 
kubelet[2605]: I0130 13:54:38.416109 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41331529-9d0a-4578-9d4a-d0617145104a-varrun\") pod \"csi-node-driver-6m9m7\" (UID: \"41331529-9d0a-4578-9d4a-d0617145104a\") " pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:38.421534 kubelet[2605]: I0130 13:54:38.416138 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhh2\" (UniqueName: \"kubernetes.io/projected/41331529-9d0a-4578-9d4a-d0617145104a-kube-api-access-hzhh2\") pod \"csi-node-driver-6m9m7\" (UID: \"41331529-9d0a-4578-9d4a-d0617145104a\") " pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:38.421534 kubelet[2605]: I0130 13:54:38.416164 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b93eabd-fc89-48b4-92bf-623f1324cc25-kube-proxy\") pod \"kube-proxy-76mkq\" (UID: \"6b93eabd-fc89-48b4-92bf-623f1324cc25\") " pod="kube-system/kube-proxy-76mkq" Jan 30 13:54:38.421534 kubelet[2605]: I0130 13:54:38.416211 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-policysync\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.421534 kubelet[2605]: I0130 13:54:38.416243 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-var-lib-calico\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.421534 kubelet[2605]: I0130 13:54:38.416277 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-cni-net-dir\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.421743 kubelet[2605]: I0130 13:54:38.416303 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13378ebe-1ab0-4d88-9cda-3c266dc75263-cni-log-dir\") pod \"calico-node-99xwr\" (UID: \"13378ebe-1ab0-4d88-9cda-3c266dc75263\") " pod="calico-system/calico-node-99xwr" Jan 30 13:54:38.452720 sudo[2454]: pam_unix(sudo:session): session closed for user root Jan 30 13:54:38.478503 sshd[2450]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:38.489016 systemd[1]: sshd@8-172.31.31.232:22-139.178.68.195:38024.service: Deactivated successfully. Jan 30 13:54:38.499433 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:54:38.503110 systemd-logind[2036]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:54:38.505246 systemd-logind[2036]: Removed session 9. 
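
The `system:anonymous cannot list/watch/create` failures earlier in this boot are the usual race between the kubelet's informers starting and its TLS bootstrap finishing: the first list/watch and event posts go out with anonymous credentials and are rejected by RBAC, and once the client certificate rotates (the transport.go:147 entry above at 13:54:38.306) the same calls succeed. Below is a minimal sketch, not taken from this log, of asking the API server what the current kubeconfig's credentials may do, using client-go's standard SelfSubjectAccessReview API; the kubeconfig path is an assumption.

    // rbaccheck.go - minimal sketch: probe whether the credentials in a
    // kubeconfig are allowed to list nodes, i.e. the exact call that the
    // reflector entries above show failing for system:anonymous.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path is an assumption; the kubelet's kubeconfig location varies.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	rev := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Verb: "list", Resource: "nodes",
    			},
    		},
    	}
    	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
    		Create(context.Background(), rev, metav1.CreateOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("can list nodes:", res.Status.Allowed, res.Status.Reason)
    }
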
Jan 30 13:54:38.524268 kubelet[2605]: E0130 13:54:38.524244 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.526951 kubelet[2605]: W0130 13:54:38.525139 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.525185 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.525569 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.526951 kubelet[2605]: W0130 13:54:38.525601 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.525619 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.526072 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.526951 kubelet[2605]: W0130 13:54:38.526086 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.526100 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.526951 kubelet[2605]: E0130 13:54:38.526798 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.526951 kubelet[2605]: W0130 13:54:38.526809 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.527733 kubelet[2605]: E0130 13:54:38.526823 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.530676 kubelet[2605]: E0130 13:54:38.530656 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.532411 kubelet[2605]: W0130 13:54:38.530729 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.532411 kubelet[2605]: E0130 13:54:38.530748 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:38.583082 kubelet[2605]: E0130 13:54:38.576647 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.583082 kubelet[2605]: W0130 13:54:38.576673 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.583082 kubelet[2605]: E0130 13:54:38.576702 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.583082 kubelet[2605]: E0130 13:54:38.580766 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.583082 kubelet[2605]: W0130 13:54:38.580786 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.583082 kubelet[2605]: E0130 13:54:38.580965 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:38.594491 kubelet[2605]: E0130 13:54:38.594378 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:38.594854 kubelet[2605]: W0130 13:54:38.594402 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:38.594854 kubelet[2605]: E0130 13:54:38.594736 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:38.682627 containerd[2060]: time="2025-01-30T13:54:38.682584890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76mkq,Uid:6b93eabd-fc89-48b4-92bf-623f1324cc25,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:38.686316 containerd[2060]: time="2025-01-30T13:54:38.686275413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-99xwr,Uid:13378ebe-1ab0-4d88-9cda-3c266dc75263,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:39.283269 containerd[2060]: time="2025-01-30T13:54:39.283191029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:54:39.284767 containerd[2060]: time="2025-01-30T13:54:39.284724688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:54:39.292702 containerd[2060]: time="2025-01-30T13:54:39.291880702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:54:39.292702 containerd[2060]: time="2025-01-30T13:54:39.292309240Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:54:39.293756 containerd[2060]: time="2025-01-30T13:54:39.293726843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:54:39.296088 containerd[2060]: time="2025-01-30T13:54:39.296052310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:54:39.296911 containerd[2060]: time="2025-01-30T13:54:39.296876708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.190592ms" Jan 30 13:54:39.301092 containerd[2060]: time="2025-01-30T13:54:39.301033454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.660967ms" Jan 30 13:54:39.365797 kubelet[2605]: E0130 13:54:39.365606 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:39.538103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053541738.mount: Deactivated successfully. 
Jan 30 13:54:39.544166 kubelet[2605]: E0130 13:54:39.544108 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:39.560044 containerd[2060]: time="2025-01-30T13:54:39.559598711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:39.560044 containerd[2060]: time="2025-01-30T13:54:39.559665829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:39.560044 containerd[2060]: time="2025-01-30T13:54:39.559684216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:39.560044 containerd[2060]: time="2025-01-30T13:54:39.559813954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:39.564249 containerd[2060]: time="2025-01-30T13:54:39.563604935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:39.564249 containerd[2060]: time="2025-01-30T13:54:39.563726065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:39.564249 containerd[2060]: time="2025-01-30T13:54:39.563795126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:39.564249 containerd[2060]: time="2025-01-30T13:54:39.563940481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:39.712959 containerd[2060]: time="2025-01-30T13:54:39.712918202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-76mkq,Uid:6b93eabd-fc89-48b4-92bf-623f1324cc25,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cb207649b91c6d7a5476ddf2a3546f55cbd827c420506eb15c257e727da85f5\"" Jan 30 13:54:39.715605 containerd[2060]: time="2025-01-30T13:54:39.715359917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-99xwr,Uid:13378ebe-1ab0-4d88-9cda-3c266dc75263,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\"" Jan 30 13:54:39.717192 containerd[2060]: time="2025-01-30T13:54:39.717084713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:54:40.367304 kubelet[2605]: E0130 13:54:40.366612 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:41.251985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420847928.mount: Deactivated successfully. 
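
The file_linux.go:61 entry that recurs once a second through this section is the kubelet's static-pod file source: it re-reads its configured staticPodPath and, until that directory exists, logs "path does not exist, ignoring" and moves on; creating the directory, even empty, silences it. A purely illustrative loop follows, assuming the stock /etc/kubernetes/manifests path; the one-second interval mirrors the log cadence here, not a documented kubelet constant.

    // staticpods.go - illustrative sketch of the static-pod path check: stat
    // the directory, and once it exists treat each regular file in it as a
    // static pod manifest.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const path = "/etc/kubernetes/manifests"
    	for {
    		entries, err := os.ReadDir(path)
    		if os.IsNotExist(err) {
    			fmt.Println("path does not exist, ignoring:", path)
    		} else if err != nil {
    			fmt.Println("unable to read config path:", err)
    		} else {
    			for _, e := range entries {
    				if !e.IsDir() {
    					fmt.Println("static pod manifest:", e.Name())
    				}
    			}
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }
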
Jan 30 13:54:41.367347 kubelet[2605]: E0130 13:54:41.367312 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:41.544132 kubelet[2605]: E0130 13:54:41.541835 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:42.001352 containerd[2060]: time="2025-01-30T13:54:42.001074246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:42.004927 containerd[2060]: time="2025-01-30T13:54:42.004827801Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:54:42.012231 containerd[2060]: time="2025-01-30T13:54:42.011695473Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:42.021063 containerd[2060]: time="2025-01-30T13:54:42.021010213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:42.032609 containerd[2060]: time="2025-01-30T13:54:42.032549218Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.3136973s" Jan 30 13:54:42.033111 containerd[2060]: time="2025-01-30T13:54:42.033072201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:54:42.035668 containerd[2060]: time="2025-01-30T13:54:42.035640627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:54:42.036909 containerd[2060]: time="2025-01-30T13:54:42.036873216Z" level=info msg="CreateContainer within sandbox \"2cb207649b91c6d7a5476ddf2a3546f55cbd827c420506eb15c257e727da85f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:54:42.062409 containerd[2060]: time="2025-01-30T13:54:42.062354319Z" level=info msg="CreateContainer within sandbox \"2cb207649b91c6d7a5476ddf2a3546f55cbd827c420506eb15c257e727da85f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ffbba3b2d1d869de02116f66ff8c964afd5b4516d35e18e78ff0aa61104a01d\"" Jan 30 13:54:42.070217 containerd[2060]: time="2025-01-30T13:54:42.070152710Z" level=info msg="StartContainer for \"3ffbba3b2d1d869de02116f66ff8c964afd5b4516d35e18e78ff0aa61104a01d\"" Jan 30 13:54:42.173082 containerd[2060]: time="2025-01-30T13:54:42.173038927Z" level=info msg="StartContainer for \"3ffbba3b2d1d869de02116f66ff8c964afd5b4516d35e18e78ff0aa61104a01d\" returns successfully" Jan 30 13:54:42.368351 kubelet[2605]: E0130 13:54:42.368238 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:42.601521 kubelet[2605]: I0130 
13:54:42.601322 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-76mkq" podStartSLOduration=3.283062107 podStartE2EDuration="5.601297737s" podCreationTimestamp="2025-01-30 13:54:37 +0000 UTC" firstStartedPulling="2025-01-30 13:54:39.716318004 +0000 UTC m=+3.318954253" lastFinishedPulling="2025-01-30 13:54:42.034553844 +0000 UTC m=+5.637189883" observedRunningTime="2025-01-30 13:54:42.60066048 +0000 UTC m=+6.203296539" watchObservedRunningTime="2025-01-30 13:54:42.601297737 +0000 UTC m=+6.203933796" Jan 30 13:54:42.635429 kubelet[2605]: E0130 13:54:42.634931 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.635429 kubelet[2605]: W0130 13:54:42.635089 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.635429 kubelet[2605]: E0130 13:54:42.635119 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.636112 kubelet[2605]: E0130 13:54:42.635923 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.636112 kubelet[2605]: W0130 13:54:42.635955 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.636112 kubelet[2605]: E0130 13:54:42.635974 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.637358 kubelet[2605]: E0130 13:54:42.637140 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.637358 kubelet[2605]: W0130 13:54:42.637288 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.637358 kubelet[2605]: E0130 13:54:42.637308 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.637985 kubelet[2605]: E0130 13:54:42.637763 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.637985 kubelet[2605]: W0130 13:54:42.637779 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.637985 kubelet[2605]: E0130 13:54:42.637805 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.638409 kubelet[2605]: E0130 13:54:42.638335 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.638638 kubelet[2605]: W0130 13:54:42.638472 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.638638 kubelet[2605]: E0130 13:54:42.638493 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.639058 kubelet[2605]: E0130 13:54:42.638922 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.639058 kubelet[2605]: W0130 13:54:42.638935 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.639058 kubelet[2605]: E0130 13:54:42.638963 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.639946 kubelet[2605]: E0130 13:54:42.639696 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.639946 kubelet[2605]: W0130 13:54:42.639771 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.639946 kubelet[2605]: E0130 13:54:42.639789 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.641882 kubelet[2605]: E0130 13:54:42.641543 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.641882 kubelet[2605]: W0130 13:54:42.641558 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.641882 kubelet[2605]: E0130 13:54:42.641575 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.642802 kubelet[2605]: E0130 13:54:42.642680 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.642802 kubelet[2605]: W0130 13:54:42.642695 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.642802 kubelet[2605]: E0130 13:54:42.642711 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.644775 kubelet[2605]: E0130 13:54:42.644067 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.644775 kubelet[2605]: W0130 13:54:42.644081 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.644775 kubelet[2605]: E0130 13:54:42.644096 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.647527 kubelet[2605]: E0130 13:54:42.647501 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.647527 kubelet[2605]: W0130 13:54:42.647523 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.647649 kubelet[2605]: E0130 13:54:42.647547 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.648243 kubelet[2605]: E0130 13:54:42.647981 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.648243 kubelet[2605]: W0130 13:54:42.647997 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.648243 kubelet[2605]: E0130 13:54:42.648016 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.648615 kubelet[2605]: E0130 13:54:42.648540 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.648615 kubelet[2605]: W0130 13:54:42.648554 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.648615 kubelet[2605]: E0130 13:54:42.648568 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.648781 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.651253 kubelet[2605]: W0130 13:54:42.648792 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.648803 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.649358 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.651253 kubelet[2605]: W0130 13:54:42.649369 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.649383 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.650138 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.651253 kubelet[2605]: W0130 13:54:42.650150 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.650214 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.651253 kubelet[2605]: E0130 13:54:42.650861 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.652017 kubelet[2605]: W0130 13:54:42.650871 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.650884 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.651267 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.652017 kubelet[2605]: W0130 13:54:42.651277 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.651330 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.651748 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.652017 kubelet[2605]: W0130 13:54:42.651759 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.651772 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.652017 kubelet[2605]: E0130 13:54:42.651973 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.652017 kubelet[2605]: W0130 13:54:42.651981 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.652563 kubelet[2605]: E0130 13:54:42.651993 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.652563 kubelet[2605]: E0130 13:54:42.652328 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.652563 kubelet[2605]: W0130 13:54:42.652337 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.652563 kubelet[2605]: E0130 13:54:42.652349 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.652766 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.653713 kubelet[2605]: W0130 13:54:42.652780 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.652797 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.653269 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.653713 kubelet[2605]: W0130 13:54:42.653280 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.653308 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.653625 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.653713 kubelet[2605]: W0130 13:54:42.653635 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.653713 kubelet[2605]: E0130 13:54:42.653652 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.655875 kubelet[2605]: E0130 13:54:42.655803 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.655936 kubelet[2605]: W0130 13:54:42.655892 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.656067 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.656679 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.658221 kubelet[2605]: W0130 13:54:42.656692 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.656719 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.657744 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.658221 kubelet[2605]: W0130 13:54:42.657756 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.657883 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.658221 kubelet[2605]: E0130 13:54:42.658226 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.658712 kubelet[2605]: W0130 13:54:42.658238 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658712 kubelet[2605]: E0130 13:54:42.658263 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.658712 kubelet[2605]: E0130 13:54:42.658591 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.658712 kubelet[2605]: W0130 13:54:42.658602 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658712 kubelet[2605]: E0130 13:54:42.658614 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:42.658931 kubelet[2605]: E0130 13:54:42.658863 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.658931 kubelet[2605]: W0130 13:54:42.658872 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.658931 kubelet[2605]: E0130 13:54:42.658885 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.659250 kubelet[2605]: E0130 13:54:42.659237 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.659720 kubelet[2605]: W0130 13:54:42.659319 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.659720 kubelet[2605]: E0130 13:54:42.659338 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:42.660046 kubelet[2605]: E0130 13:54:42.660033 2605 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:42.661145 kubelet[2605]: W0130 13:54:42.661125 2605 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:42.661397 kubelet[2605]: E0130 13:54:42.661358 2605 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:43.296497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277669759.mount: Deactivated successfully. 
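
These bursts of driver-call.go/plugins.go errors, here and at 13:54:38, are the kubelet's dynamic FlexVolume prober reacting to file events under /opt/libexec/kubernetes/kubelet-plugins/volume/exec: calico-node's flexvol-driver container is installing the nodeagent~uds driver into the flexvol-driver-host mount at exactly this moment, so every write re-triggers a probe that execs a binary which is not runnable yet. The kubelet invokes each driver with the argument "init" and JSON-decodes its stdout, so an empty stdout yields precisely "unexpected end of JSON input". The storm stops once the driver answers init with a JSON status object; a sketch of the minimal conforming reply is below (calico's real uds driver does considerably more than this).

    // udsinit.go - sketch of the stdout a FlexVolume driver must produce for
    // "init" to satisfy the prober: a JSON object with status "Success".
    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// attach:false tells the kubelet this driver has no attach/detach
    		// phase, the common case for node-local drivers like uds.
    		json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
    			"status":       "Success",
    			"capabilities": map[string]bool{"attach": false},
    		})
    	}
    }
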
Jan 30 13:54:43.368889 kubelet[2605]: E0130 13:54:43.368629 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:43.472546 containerd[2060]: time="2025-01-30T13:54:43.472480247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:43.474600 containerd[2060]: time="2025-01-30T13:54:43.474097717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:54:43.475819 containerd[2060]: time="2025-01-30T13:54:43.475615774Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:43.493240 containerd[2060]: time="2025-01-30T13:54:43.490147905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:43.493240 containerd[2060]: time="2025-01-30T13:54:43.492231501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.455913628s" Jan 30 13:54:43.493240 containerd[2060]: time="2025-01-30T13:54:43.492279762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:54:43.500734 containerd[2060]: time="2025-01-30T13:54:43.499617060Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:54:43.541168 kubelet[2605]: E0130 13:54:43.539820 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:43.541995 containerd[2060]: time="2025-01-30T13:54:43.541947121Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7\"" Jan 30 13:54:43.542629 containerd[2060]: time="2025-01-30T13:54:43.542594651Z" level=info msg="StartContainer for \"479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7\"" Jan 30 13:54:43.611283 containerd[2060]: time="2025-01-30T13:54:43.609663012Z" level=info msg="StartContainer for \"479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7\" returns successfully" Jan 30 13:54:44.049227 containerd[2060]: time="2025-01-30T13:54:44.049133548Z" level=info msg="shim disconnected" id=479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7 namespace=k8s.io Jan 30 13:54:44.049227 containerd[2060]: 
time="2025-01-30T13:54:44.049216684Z" level=warning msg="cleaning up after shim disconnected" id=479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7 namespace=k8s.io Jan 30 13:54:44.049227 containerd[2060]: time="2025-01-30T13:54:44.049229864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:44.213614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479b1b761a09cf0044ae604a46075f766ec5900e16fbddbd4d48778e27b32dc7-rootfs.mount: Deactivated successfully. Jan 30 13:54:44.369378 kubelet[2605]: E0130 13:54:44.369229 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:44.578235 containerd[2060]: time="2025-01-30T13:54:44.577029597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:54:45.370341 kubelet[2605]: E0130 13:54:45.370295 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:45.540367 kubelet[2605]: E0130 13:54:45.539956 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:46.371317 kubelet[2605]: E0130 13:54:46.371267 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:47.371915 kubelet[2605]: E0130 13:54:47.371859 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:47.541864 kubelet[2605]: E0130 13:54:47.540333 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:48.372953 kubelet[2605]: E0130 13:54:48.372802 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:48.842789 containerd[2060]: time="2025-01-30T13:54:48.842733615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:48.846559 containerd[2060]: time="2025-01-30T13:54:48.846458111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:54:48.848415 containerd[2060]: time="2025-01-30T13:54:48.848335743Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:48.854711 containerd[2060]: time="2025-01-30T13:54:48.854635017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:48.857175 containerd[2060]: time="2025-01-30T13:54:48.856950561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.278956253s" Jan 30 13:54:48.857175 containerd[2060]: time="2025-01-30T13:54:48.856997646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:54:48.867360 containerd[2060]: time="2025-01-30T13:54:48.867309946Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:54:48.899350 containerd[2060]: time="2025-01-30T13:54:48.899303482Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc\"" Jan 30 13:54:48.900161 containerd[2060]: time="2025-01-30T13:54:48.900123939Z" level=info msg="StartContainer for \"106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc\"" Jan 30 13:54:48.984120 containerd[2060]: time="2025-01-30T13:54:48.984034247Z" level=info msg="StartContainer for \"106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc\" returns successfully" Jan 30 13:54:49.373721 kubelet[2605]: E0130 13:54:49.373564 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:49.542770 kubelet[2605]: E0130 13:54:49.540827 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:49.960218 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:54:50.273123 containerd[2060]: time="2025-01-30T13:54:50.273059905Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:50.310829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc-rootfs.mount: Deactivated successfully. 
Jan 30 13:54:50.317382 kubelet[2605]: I0130 13:54:50.317279 2605 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:54:50.373874 kubelet[2605]: E0130 13:54:50.373830 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:50.495680 containerd[2060]: time="2025-01-30T13:54:50.495496086Z" level=info msg="shim disconnected" id=106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc namespace=k8s.io Jan 30 13:54:50.495680 containerd[2060]: time="2025-01-30T13:54:50.495670296Z" level=warning msg="cleaning up after shim disconnected" id=106445672307a9f3b6d120b50d14751f644fb2cac05495a5d8866c41aeed16dc namespace=k8s.io Jan 30 13:54:50.495680 containerd[2060]: time="2025-01-30T13:54:50.495757348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:50.516131 containerd[2060]: time="2025-01-30T13:54:50.516069085Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:54:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:54:50.606226 containerd[2060]: time="2025-01-30T13:54:50.605410309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:54:51.374731 kubelet[2605]: E0130 13:54:51.374119 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:51.544711 containerd[2060]: time="2025-01-30T13:54:51.544398126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6m9m7,Uid:41331529-9d0a-4578-9d4a-d0617145104a,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:51.660175 containerd[2060]: time="2025-01-30T13:54:51.660040202Z" level=error msg="Failed to destroy network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:51.660449 containerd[2060]: time="2025-01-30T13:54:51.660414178Z" level=error msg="encountered an error cleaning up failed sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:51.660535 containerd[2060]: time="2025-01-30T13:54:51.660488190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6m9m7,Uid:41331529-9d0a-4578-9d4a-d0617145104a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:51.663221 kubelet[2605]: E0130 13:54:51.661213 2605 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 30 13:54:51.663221 kubelet[2605]: E0130 13:54:51.661305 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:51.664649 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456-shm.mount: Deactivated successfully. Jan 30 13:54:51.666486 kubelet[2605]: E0130 13:54:51.664684 2605 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6m9m7" Jan 30 13:54:51.666486 kubelet[2605]: E0130 13:54:51.664790 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6m9m7_calico-system(41331529-9d0a-4578-9d4a-d0617145104a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6m9m7_calico-system(41331529-9d0a-4578-9d4a-d0617145104a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:52.374356 kubelet[2605]: E0130 13:54:52.374299 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:52.607979 kubelet[2605]: I0130 13:54:52.607933 2605 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:54:52.610111 containerd[2060]: time="2025-01-30T13:54:52.610069701Z" level=info msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" Jan 30 13:54:52.611134 containerd[2060]: time="2025-01-30T13:54:52.610364418Z" level=info msg="Ensure that sandbox a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456 in task-service has been cleanup successfully" Jan 30 13:54:52.684329 containerd[2060]: time="2025-01-30T13:54:52.682586160Z" level=error msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" failed" error="failed to destroy network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:52.684547 kubelet[2605]: E0130 13:54:52.682911 2605 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:54:52.684547 kubelet[2605]: E0130 13:54:52.682974 2605 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456"} Jan 30 13:54:52.684547 kubelet[2605]: E0130 13:54:52.683047 2605 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41331529-9d0a-4578-9d4a-d0617145104a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:52.684547 kubelet[2605]: E0130 13:54:52.683078 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41331529-9d0a-4578-9d4a-d0617145104a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6m9m7" podUID="41331529-9d0a-4578-9d4a-d0617145104a" Jan 30 13:54:52.782232 kubelet[2605]: I0130 13:54:52.780158 2605 topology_manager.go:215] "Topology Admit Handler" podUID="90c22885-b57e-41ea-be7e-6de1aec04565" podNamespace="default" podName="nginx-deployment-85f456d6dd-25dwr" Jan 30 13:54:52.946593 kubelet[2605]: I0130 13:54:52.946438 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh62k\" (UniqueName: \"kubernetes.io/projected/90c22885-b57e-41ea-be7e-6de1aec04565-kube-api-access-fh62k\") pod \"nginx-deployment-85f456d6dd-25dwr\" (UID: \"90c22885-b57e-41ea-be7e-6de1aec04565\") " pod="default/nginx-deployment-85f456d6dd-25dwr" Jan 30 13:54:53.089824 containerd[2060]: time="2025-01-30T13:54:53.089699094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-25dwr,Uid:90c22885-b57e-41ea-be7e-6de1aec04565,Namespace:default,Attempt:0,}" Jan 30 13:54:53.274292 containerd[2060]: time="2025-01-30T13:54:53.274245416Z" level=error msg="Failed to destroy network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:53.279281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81-shm.mount: Deactivated successfully. 
Jan 30 13:54:53.280066 containerd[2060]: time="2025-01-30T13:54:53.279396983Z" level=error msg="encountered an error cleaning up failed sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:53.280066 containerd[2060]: time="2025-01-30T13:54:53.279473652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-25dwr,Uid:90c22885-b57e-41ea-be7e-6de1aec04565,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:53.280240 kubelet[2605]: E0130 13:54:53.279813 2605 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:53.280240 kubelet[2605]: E0130 13:54:53.279886 2605 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-25dwr" Jan 30 13:54:53.280240 kubelet[2605]: E0130 13:54:53.279914 2605 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-25dwr" Jan 30 13:54:53.280398 kubelet[2605]: E0130 13:54:53.279967 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-25dwr_default(90c22885-b57e-41ea-be7e-6de1aec04565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-25dwr_default(90c22885-b57e-41ea-be7e-6de1aec04565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-25dwr" podUID="90c22885-b57e-41ea-be7e-6de1aec04565" Jan 30 13:54:53.375374 kubelet[2605]: E0130 13:54:53.375293 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:53.615544 kubelet[2605]: I0130 13:54:53.614890 2605 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:54:53.617672 containerd[2060]: time="2025-01-30T13:54:53.617403360Z" level=info msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" Jan 30 13:54:53.617672 containerd[2060]: time="2025-01-30T13:54:53.617604919Z" level=info msg="Ensure that sandbox 41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81 in task-service has been cleanup successfully" Jan 30 13:54:53.673723 containerd[2060]: time="2025-01-30T13:54:53.673667024Z" level=error msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" failed" error="failed to destroy network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:53.674049 kubelet[2605]: E0130 13:54:53.674010 2605 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:54:53.674160 kubelet[2605]: E0130 13:54:53.674070 2605 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81"} Jan 30 13:54:53.674160 kubelet[2605]: E0130 13:54:53.674117 2605 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90c22885-b57e-41ea-be7e-6de1aec04565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:53.674329 kubelet[2605]: E0130 13:54:53.674150 2605 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90c22885-b57e-41ea-be7e-6de1aec04565\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-25dwr" podUID="90c22885-b57e-41ea-be7e-6de1aec04565" Jan 30 13:54:54.375773 kubelet[2605]: E0130 13:54:54.375729 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:55.376787 kubelet[2605]: E0130 13:54:55.376719 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:56.381434 kubelet[2605]: E0130 13:54:56.381321 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:57.362388 kubelet[2605]: E0130 
13:54:57.362332 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:57.382577 kubelet[2605]: E0130 13:54:57.382299 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:58.157774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975653981.mount: Deactivated successfully. Jan 30 13:54:58.231578 containerd[2060]: time="2025-01-30T13:54:58.229504419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:54:58.231578 containerd[2060]: time="2025-01-30T13:54:58.231501700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.233626 containerd[2060]: time="2025-01-30T13:54:58.233577223Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.235720 containerd[2060]: time="2025-01-30T13:54:58.235676227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.630218488s" Jan 30 13:54:58.236248 containerd[2060]: time="2025-01-30T13:54:58.235722884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:54:58.236248 containerd[2060]: time="2025-01-30T13:54:58.236153963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:58.319855 containerd[2060]: time="2025-01-30T13:54:58.319812436Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:54:58.361327 containerd[2060]: time="2025-01-30T13:54:58.359715348Z" level=info msg="CreateContainer within sandbox \"c0fdf4180a1985526bcd661228ea0675700768e22c407cb56a89ce3ead7f09b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1f292d1c97dde9ed417114fae7b7f7b097221f9c7abb4d0b0fe1e78aaedac643\"" Jan 30 13:54:58.371745 containerd[2060]: time="2025-01-30T13:54:58.371688428Z" level=info msg="StartContainer for \"1f292d1c97dde9ed417114fae7b7f7b097221f9c7abb4d0b0fe1e78aaedac643\"" Jan 30 13:54:58.382798 kubelet[2605]: E0130 13:54:58.382740 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:58.550100 containerd[2060]: time="2025-01-30T13:54:58.550052289Z" level=info msg="StartContainer for \"1f292d1c97dde9ed417114fae7b7f7b097221f9c7abb4d0b0fe1e78aaedac643\" returns successfully" Jan 30 13:54:58.654894 kubelet[2605]: I0130 13:54:58.654832 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-99xwr" podStartSLOduration=3.137005462 podStartE2EDuration="21.654812552s" podCreationTimestamp="2025-01-30 13:54:37 +0000 UTC" 
firstStartedPulling="2025-01-30 13:54:39.720175864 +0000 UTC m=+3.322811901" lastFinishedPulling="2025-01-30 13:54:58.237982949 +0000 UTC m=+21.840618991" observedRunningTime="2025-01-30 13:54:58.654813471 +0000 UTC m=+22.257449528" watchObservedRunningTime="2025-01-30 13:54:58.654812552 +0000 UTC m=+22.257448608" Jan 30 13:54:58.670175 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:54:58.670471 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:54:59.383450 kubelet[2605]: E0130 13:54:59.383390 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:59.637624 kubelet[2605]: I0130 13:54:59.637510 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:00.384314 kubelet[2605]: E0130 13:55:00.384264 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:00.757264 kernel: bpftool[3376]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:55:01.058850 (udev-worker)[3234]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:55:01.059570 systemd-networkd[1646]: vxlan.calico: Link UP Jan 30 13:55:01.059576 systemd-networkd[1646]: vxlan.calico: Gained carrier Jan 30 13:55:01.102738 (udev-worker)[3405]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:55:01.386847 kubelet[2605]: E0130 13:55:01.386122 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:01.606433 kubelet[2605]: I0130 13:55:01.605818 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:55:02.386687 kubelet[2605]: E0130 13:55:02.386638 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:03.039529 systemd-networkd[1646]: vxlan.calico: Gained IPv6LL Jan 30 13:55:03.387965 kubelet[2605]: E0130 13:55:03.387757 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:03.883403 update_engine[2038]: I20250130 13:55:03.883160 2038 update_attempter.cc:509] Updating boot flags... Jan 30 13:55:03.960311 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3413) Jan 30 13:55:04.172305 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3508) Jan 30 13:55:04.388594 kubelet[2605]: E0130 13:55:04.388539 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:04.541156 containerd[2060]: time="2025-01-30T13:55:04.541005927Z" level=info msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.666 [INFO][3684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.667 [INFO][3684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" iface="eth0" netns="/var/run/netns/cni-050af98d-5419-4d70-5497-277026558712" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.667 [INFO][3684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" iface="eth0" netns="/var/run/netns/cni-050af98d-5419-4d70-5497-277026558712" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.669 [INFO][3684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" iface="eth0" netns="/var/run/netns/cni-050af98d-5419-4d70-5497-277026558712" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.669 [INFO][3684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.669 [INFO][3684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.816 [INFO][3690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.820 [INFO][3690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.820 [INFO][3690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.844 [WARNING][3690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.844 [INFO][3690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.846 [INFO][3690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:04.853964 containerd[2060]: 2025-01-30 13:55:04.851 [INFO][3684] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:04.858430 containerd[2060]: time="2025-01-30T13:55:04.858259774Z" level=info msg="TearDown network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" successfully" Jan 30 13:55:04.858430 containerd[2060]: time="2025-01-30T13:55:04.858302726Z" level=info msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" returns successfully" Jan 30 13:55:04.859533 containerd[2060]: time="2025-01-30T13:55:04.859496642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6m9m7,Uid:41331529-9d0a-4578-9d4a-d0617145104a,Namespace:calico-system,Attempt:1,}" Jan 30 13:55:04.870101 systemd[1]: run-netns-cni\x2d050af98d\x2d5419\x2d4d70\x2d5497\x2d277026558712.mount: Deactivated successfully. Jan 30 13:55:05.174520 systemd-networkd[1646]: cali38f85c2e272: Link UP Jan 30 13:55:05.181832 systemd-networkd[1646]: cali38f85c2e272: Gained carrier Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:04.978 [INFO][3700] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.232-k8s-csi--node--driver--6m9m7-eth0 csi-node-driver- calico-system 41331529-9d0a-4578-9d4a-d0617145104a 1042 0 2025-01-30 13:54:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.31.232 csi-node-driver-6m9m7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38f85c2e272 [] []}} ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:04.978 [INFO][3700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.066 [INFO][3707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" HandleID="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.088 [INFO][3707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" HandleID="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.31.232", "pod":"csi-node-driver-6m9m7", "timestamp":"2025-01-30 13:55:05.066741121 +0000 UTC"}, Hostname:"172.31.31.232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:05.207510 
containerd[2060]: 2025-01-30 13:55:05.089 [INFO][3707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.089 [INFO][3707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.089 [INFO][3707] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.232' Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.093 [INFO][3707] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.104 [INFO][3707] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.112 [INFO][3707] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.115 [INFO][3707] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.121 [INFO][3707] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.122 [INFO][3707] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.126 [INFO][3707] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53 Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.132 [INFO][3707] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.147 [INFO][3707] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.148 [INFO][3707] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" host="172.31.31.232" Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.149 [INFO][3707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
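The ipam records above walk the usual Calico allocation path: acquire the host-wide lock, look up this host's block affinity, confirm the /26 block, claim one address from it, release the lock. The block arithmetic is easy to check; a stdlib-only sketch (illustrative, not Calico's IPAM code):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // Count the addresses in the affinity block from this log and show the
    // first workload address handed out here (192.168.119.193).
    func main() {
    	block := netip.MustParsePrefix("192.168.119.192/26")
    	n := 0
    	for a := block.Addr(); block.Contains(a); a = a.Next() {
    		n++ // 2^(32-26) = 64 addresses
    	}
    	fmt.Printf("block %s holds %d addresses\n", block, n)
    	fmt.Printf("first workload IP claimed in this log: %s/26\n", block.Addr().Next())
    }

With a /26 affinity per node, this host can draw up to 64 pod addresses from its block before the IPAM plugin would need to claim another one.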
Jan 30 13:55:05.207510 containerd[2060]: 2025-01-30 13:55:05.149 [INFO][3707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" HandleID="k8s-pod-network.5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.159 [INFO][3700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-csi--node--driver--6m9m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41331529-9d0a-4578-9d4a-d0617145104a", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"", Pod:"csi-node-driver-6m9m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38f85c2e272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.161 [INFO][3700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.193/32] ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.161 [INFO][3700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38f85c2e272 ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.186 [INFO][3700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.186 [INFO][3700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" 
WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-csi--node--driver--6m9m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41331529-9d0a-4578-9d4a-d0617145104a", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53", Pod:"csi-node-driver-6m9m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38f85c2e272", MAC:"36:43:f6:a5:c3:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:05.213572 containerd[2060]: 2025-01-30 13:55:05.199 [INFO][3700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53" Namespace="calico-system" Pod="csi-node-driver-6m9m7" WorkloadEndpoint="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:05.270852 containerd[2060]: time="2025-01-30T13:55:05.270667734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:05.270852 containerd[2060]: time="2025-01-30T13:55:05.270739576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:05.270852 containerd[2060]: time="2025-01-30T13:55:05.270753384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:05.271495 containerd[2060]: time="2025-01-30T13:55:05.271292213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:05.346504 containerd[2060]: time="2025-01-30T13:55:05.346460029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6m9m7,Uid:41331529-9d0a-4578-9d4a-d0617145104a,Namespace:calico-system,Attempt:1,} returns sandbox id \"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53\"" Jan 30 13:55:05.349395 containerd[2060]: time="2025-01-30T13:55:05.349069261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:55:05.389298 kubelet[2605]: E0130 13:55:05.389256 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:06.390353 kubelet[2605]: E0130 13:55:06.390293 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:06.752016 systemd-networkd[1646]: cali38f85c2e272: Gained IPv6LL Jan 30 13:55:06.931985 containerd[2060]: time="2025-01-30T13:55:06.931921951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:06.935667 containerd[2060]: time="2025-01-30T13:55:06.935553399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:55:06.948643 containerd[2060]: time="2025-01-30T13:55:06.948568494Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:06.965261 containerd[2060]: time="2025-01-30T13:55:06.964178074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:06.967030 containerd[2060]: time="2025-01-30T13:55:06.966989808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.617811931s" Jan 30 13:55:06.967469 containerd[2060]: time="2025-01-30T13:55:06.967034503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:55:06.969765 containerd[2060]: time="2025-01-30T13:55:06.969730167Z" level=info msg="CreateContainer within sandbox \"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:55:07.056026 containerd[2060]: time="2025-01-30T13:55:07.055847626Z" level=info msg="CreateContainer within sandbox \"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"698a930945ffe832bcf3c49e02d15cf3cfbe23e26064d38223009073f8cd9082\"" Jan 30 13:55:07.059207 containerd[2060]: time="2025-01-30T13:55:07.057238353Z" level=info msg="StartContainer for \"698a930945ffe832bcf3c49e02d15cf3cfbe23e26064d38223009073f8cd9082\"" Jan 30 13:55:07.146436 containerd[2060]: time="2025-01-30T13:55:07.146304954Z" level=info msg="StartContainer for \"698a930945ffe832bcf3c49e02d15cf3cfbe23e26064d38223009073f8cd9082\" 
returns successfully" Jan 30 13:55:07.150049 containerd[2060]: time="2025-01-30T13:55:07.148786551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:55:07.391487 kubelet[2605]: E0130 13:55:07.391274 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:08.402273 kubelet[2605]: E0130 13:55:08.391455 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:08.541680 containerd[2060]: time="2025-01-30T13:55:08.541585142Z" level=info msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.687 [INFO][3827] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.687 [INFO][3827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" iface="eth0" netns="/var/run/netns/cni-1fa00b24-2c4b-519a-0806-e4d7b2382217" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.688 [INFO][3827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" iface="eth0" netns="/var/run/netns/cni-1fa00b24-2c4b-519a-0806-e4d7b2382217" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.688 [INFO][3827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" iface="eth0" netns="/var/run/netns/cni-1fa00b24-2c4b-519a-0806-e4d7b2382217" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.688 [INFO][3827] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.689 [INFO][3827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.756 [INFO][3833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.756 [INFO][3833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.756 [INFO][3833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.769 [WARNING][3833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.769 [INFO][3833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.772 [INFO][3833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:08.777816 containerd[2060]: 2025-01-30 13:55:08.775 [INFO][3827] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:08.779774 containerd[2060]: time="2025-01-30T13:55:08.779416813Z" level=info msg="TearDown network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" successfully" Jan 30 13:55:08.779774 containerd[2060]: time="2025-01-30T13:55:08.779453146Z" level=info msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" returns successfully" Jan 30 13:55:08.782517 containerd[2060]: time="2025-01-30T13:55:08.782240384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-25dwr,Uid:90c22885-b57e-41ea-be7e-6de1aec04565,Namespace:default,Attempt:1,}" Jan 30 13:55:08.784523 systemd[1]: run-netns-cni\x2d1fa00b24\x2d2c4b\x2d519a\x2d0806\x2de4d7b2382217.mount: Deactivated successfully. Jan 30 13:55:09.082656 containerd[2060]: time="2025-01-30T13:55:09.082306545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:09.087280 containerd[2060]: time="2025-01-30T13:55:09.087121885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:55:09.090699 containerd[2060]: time="2025-01-30T13:55:09.090584391Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:09.104536 containerd[2060]: time="2025-01-30T13:55:09.104365451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:09.106408 containerd[2060]: time="2025-01-30T13:55:09.106247408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.957420118s" Jan 30 13:55:09.106408 containerd[2060]: time="2025-01-30T13:55:09.106297243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:55:09.111070 
containerd[2060]: time="2025-01-30T13:55:09.111026976Z" level=info msg="CreateContainer within sandbox \"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:55:09.130059 systemd-networkd[1646]: calief24f5028f7: Link UP Jan 30 13:55:09.130463 systemd-networkd[1646]: calief24f5028f7: Gained carrier Jan 30 13:55:09.135183 (udev-worker)[3859]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:08.917 [INFO][3844] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0 nginx-deployment-85f456d6dd- default 90c22885-b57e-41ea-be7e-6de1aec04565 1063 0 2025-01-30 13:54:52 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.31.232 nginx-deployment-85f456d6dd-25dwr eth0 default [] [] [kns.default ksa.default.default] calief24f5028f7 [] []}} ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:08.918 [INFO][3844] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.008 [INFO][3851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" HandleID="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.048 [INFO][3851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" HandleID="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcb50), Attrs:map[string]string{"namespace":"default", "node":"172.31.31.232", "pod":"nginx-deployment-85f456d6dd-25dwr", "timestamp":"2025-01-30 13:55:09.008191878 +0000 UTC"}, Hostname:"172.31.31.232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.048 [INFO][3851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.049 [INFO][3851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.049 [INFO][3851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.232' Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.058 [INFO][3851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.073 [INFO][3851] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.080 [INFO][3851] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.083 [INFO][3851] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.087 [INFO][3851] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.088 [INFO][3851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.091 [INFO][3851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2 Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.103 [INFO][3851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.121 [INFO][3851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.121 [INFO][3851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" host="172.31.31.232" Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.121 [INFO][3851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
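This second IPAM walk, now for the nginx pod, finds the same affinity already confirmed, so no new block is claimed and the next address comes out of the same /26; the summary record that follows assigns 192.168.119.194. A quick containment check makes the block reuse visible (sketch only):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // Both workload IPs in this log fall inside the single block that
    // 172.31.31.232 holds an affinity for, which is why the second walk
    // did not need to allocate a new block.
    func main() {
    	block := netip.MustParsePrefix("192.168.119.192/26")
    	for _, s := range []string{"192.168.119.193", "192.168.119.194"} {
    		a := netip.MustParseAddr(s)
    		fmt.Printf("%s in %s: %v\n", a, block, block.Contains(a))
    	}
    }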
Jan 30 13:55:09.170533 containerd[2060]: 2025-01-30 13:55:09.121 [INFO][3851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" HandleID="k8s-pod-network.4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.123 [INFO][3844] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"90c22885-b57e-41ea-be7e-6de1aec04565", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-25dwr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calief24f5028f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.123 [INFO][3844] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.194/32] ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.123 [INFO][3844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief24f5028f7 ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.131 [INFO][3844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.133 [INFO][3844] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" 
WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"90c22885-b57e-41ea-be7e-6de1aec04565", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2", Pod:"nginx-deployment-85f456d6dd-25dwr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calief24f5028f7", MAC:"b6:a3:6d:8b:90:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:09.181276 containerd[2060]: 2025-01-30 13:55:09.144 [INFO][3844] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2" Namespace="default" Pod="nginx-deployment-85f456d6dd-25dwr" WorkloadEndpoint="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:09.201285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771334923.mount: Deactivated successfully. Jan 30 13:55:09.213879 containerd[2060]: time="2025-01-30T13:55:09.213565574Z" level=info msg="CreateContainer within sandbox \"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e1a6abe78ff73fbeb6cc8f94adacbc0a75b21c34342cf3ab59d2d554695ac9ec\"" Jan 30 13:55:09.216237 containerd[2060]: time="2025-01-30T13:55:09.214754279Z" level=info msg="StartContainer for \"e1a6abe78ff73fbeb6cc8f94adacbc0a75b21c34342cf3ab59d2d554695ac9ec\"" Jan 30 13:55:09.275939 containerd[2060]: time="2025-01-30T13:55:09.275821223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:09.276094 containerd[2060]: time="2025-01-30T13:55:09.276033337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:09.276094 containerd[2060]: time="2025-01-30T13:55:09.276077197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:09.276352 containerd[2060]: time="2025-01-30T13:55:09.276305082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:09.358298 containerd[2060]: time="2025-01-30T13:55:09.356553335Z" level=info msg="StartContainer for \"e1a6abe78ff73fbeb6cc8f94adacbc0a75b21c34342cf3ab59d2d554695ac9ec\" returns successfully" Jan 30 13:55:09.391062 containerd[2060]: time="2025-01-30T13:55:09.391030908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-25dwr,Uid:90c22885-b57e-41ea-be7e-6de1aec04565,Namespace:default,Attempt:1,} returns sandbox id \"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2\"" Jan 30 13:55:09.391694 kubelet[2605]: E0130 13:55:09.391663 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:09.393272 containerd[2060]: time="2025-01-30T13:55:09.393236037Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:55:09.533176 kubelet[2605]: I0130 13:55:09.533141 2605 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:55:09.533176 kubelet[2605]: I0130 13:55:09.533178 2605 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:55:10.391910 kubelet[2605]: E0130 13:55:10.391852 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:10.784253 systemd-networkd[1646]: calief24f5028f7: Gained IPv6LL Jan 30 13:55:11.393135 kubelet[2605]: E0130 13:55:11.392819 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:12.396367 kubelet[2605]: E0130 13:55:12.396307 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:12.730844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount502379218.mount: Deactivated successfully. 
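The csi_plugin.go:100/113 lines above are the kubelet side of the plugin-registration handshake: a registrar serves a small gRPC service on a socket under the kubelet's plugin registry directory, the kubelet dials it, calls GetInfo, validates the driver, and reports back. Below is a minimal sketch of the registrar side, assuming the `k8s.io/kubelet/pkg/apis/pluginregistration/v1` package and its `RegistrationServer` interface; verify the import path and type names against your vendored kubelet version, since this is written from the API's documented shape, not from the Tigera component that produced these log lines.

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers the kubelet plugin watcher's handshake; a sketch of
// what a node-driver-registrar sidecar does for csi.tigera.io.
type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // matches "versions: 1.0.0" above
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration: ok=%v err=%q", st.PluginRegistered, st.Error)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// The kubelet watches this directory and dials any socket that appears.
	sock := "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock"
	os.Remove(sock)
	l, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	registerapi.RegisterRegistrationServer(s, registrar{})
	log.Fatal(s.Serve(l))
}
```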
Jan 30 13:55:13.398315 kubelet[2605]: E0130 13:55:13.398275 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:13.723521 ntpd[2027]: Listen normally on 6 vxlan.calico 192.168.119.192:123 Jan 30 13:55:13.727150 ntpd[2027]: 30 Jan 13:55:13 ntpd[2027]: Listen normally on 6 vxlan.calico 192.168.119.192:123 Jan 30 13:55:13.727150 ntpd[2027]: 30 Jan 13:55:13 ntpd[2027]: Listen normally on 7 vxlan.calico [fe80::64c4:a6ff:fe79:f0b1%3]:123 Jan 30 13:55:13.727150 ntpd[2027]: 30 Jan 13:55:13 ntpd[2027]: Listen normally on 8 cali38f85c2e272 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:55:13.727150 ntpd[2027]: 30 Jan 13:55:13 ntpd[2027]: Listen normally on 9 calief24f5028f7 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:55:13.724645 ntpd[2027]: Listen normally on 7 vxlan.calico [fe80::64c4:a6ff:fe79:f0b1%3]:123 Jan 30 13:55:13.724713 ntpd[2027]: Listen normally on 8 cali38f85c2e272 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:55:13.724742 ntpd[2027]: Listen normally on 9 calief24f5028f7 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:55:14.399058 kubelet[2605]: E0130 13:55:14.398866 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:14.670955 containerd[2060]: time="2025-01-30T13:55:14.670585688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:14.689986 containerd[2060]: time="2025-01-30T13:55:14.689916655Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:55:14.691614 containerd[2060]: time="2025-01-30T13:55:14.691555671Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:14.702239 containerd[2060]: time="2025-01-30T13:55:14.702169961Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.308889878s" Jan 30 13:55:14.702434 containerd[2060]: time="2025-01-30T13:55:14.702401964Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:55:14.702964 containerd[2060]: time="2025-01-30T13:55:14.702258310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:14.716185 containerd[2060]: time="2025-01-30T13:55:14.716133137Z" level=info msg="CreateContainer within sandbox \"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:55:14.740574 containerd[2060]: time="2025-01-30T13:55:14.740531292Z" level=info msg="CreateContainer within sandbox \"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bce2647dd8c1d0a262d496ae490963c24ae9e8741de2da187d828506d3a7f1c6\"" Jan 30 13:55:14.741468 containerd[2060]: time="2025-01-30T13:55:14.741381627Z" level=info 
msg="StartContainer for \"bce2647dd8c1d0a262d496ae490963c24ae9e8741de2da187d828506d3a7f1c6\"" Jan 30 13:55:14.787058 systemd[1]: run-containerd-runc-k8s.io-bce2647dd8c1d0a262d496ae490963c24ae9e8741de2da187d828506d3a7f1c6-runc.1Tyy8i.mount: Deactivated successfully. Jan 30 13:55:14.841673 containerd[2060]: time="2025-01-30T13:55:14.841629027Z" level=info msg="StartContainer for \"bce2647dd8c1d0a262d496ae490963c24ae9e8741de2da187d828506d3a7f1c6\" returns successfully" Jan 30 13:55:15.400015 kubelet[2605]: E0130 13:55:15.399820 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:15.777931 kubelet[2605]: I0130 13:55:15.777583 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6m9m7" podStartSLOduration=35.018623038 podStartE2EDuration="38.777561263s" podCreationTimestamp="2025-01-30 13:54:37 +0000 UTC" firstStartedPulling="2025-01-30 13:55:05.348762756 +0000 UTC m=+28.951398794" lastFinishedPulling="2025-01-30 13:55:09.107700975 +0000 UTC m=+32.710337019" observedRunningTime="2025-01-30 13:55:09.721067167 +0000 UTC m=+33.323703225" watchObservedRunningTime="2025-01-30 13:55:15.777561263 +0000 UTC m=+39.380197318" Jan 30 13:55:15.777931 kubelet[2605]: I0130 13:55:15.777768 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-25dwr" podStartSLOduration=18.466627138 podStartE2EDuration="23.777762859s" podCreationTimestamp="2025-01-30 13:54:52 +0000 UTC" firstStartedPulling="2025-01-30 13:55:09.392891556 +0000 UTC m=+32.995527594" lastFinishedPulling="2025-01-30 13:55:14.704027268 +0000 UTC m=+38.306663315" observedRunningTime="2025-01-30 13:55:15.777642995 +0000 UTC m=+39.380279053" watchObservedRunningTime="2025-01-30 13:55:15.777762859 +0000 UTC m=+39.380398915" Jan 30 13:55:16.400435 kubelet[2605]: E0130 13:55:16.400366 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:17.362879 kubelet[2605]: E0130 13:55:17.362818 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:17.401534 kubelet[2605]: E0130 13:55:17.401470 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:18.402263 kubelet[2605]: E0130 13:55:18.402216 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:19.402671 kubelet[2605]: E0130 13:55:19.402608 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:20.242817 kubelet[2605]: I0130 13:55:20.242766 2605 topology_manager.go:215] "Topology Admit Handler" podUID="856bc662-691f-442e-a1cf-8c5e7652bd3c" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:55:20.369688 kubelet[2605]: I0130 13:55:20.369638 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnkv\" (UniqueName: \"kubernetes.io/projected/856bc662-691f-442e-a1cf-8c5e7652bd3c-kube-api-access-fqnkv\") pod \"nfs-server-provisioner-0\" (UID: \"856bc662-691f-442e-a1cf-8c5e7652bd3c\") " pod="default/nfs-server-provisioner-0" Jan 30 13:55:20.369879 kubelet[2605]: I0130 13:55:20.369697 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/856bc662-691f-442e-a1cf-8c5e7652bd3c-data\") pod \"nfs-server-provisioner-0\" (UID: \"856bc662-691f-442e-a1cf-8c5e7652bd3c\") " pod="default/nfs-server-provisioner-0" Jan 30 13:55:20.402817 kubelet[2605]: E0130 13:55:20.402739 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:20.549518 containerd[2060]: time="2025-01-30T13:55:20.548988601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:856bc662-691f-442e-a1cf-8c5e7652bd3c,Namespace:default,Attempt:0,}" Jan 30 13:55:20.764929 systemd-networkd[1646]: cali60e51b789ff: Link UP Jan 30 13:55:20.767377 (udev-worker)[4067]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:55:20.771099 systemd-networkd[1646]: cali60e51b789ff: Gained carrier Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.633 [INFO][4049] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.232-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 856bc662-691f-442e-a1cf-8c5e7652bd3c 1123 0 2025-01-30 13:55:20 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.31.232 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.634 [INFO][4049] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.681 [INFO][4060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" HandleID="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Workload="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.697 [INFO][4060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" HandleID="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Workload="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e80), Attrs:map[string]string{"namespace":"default", "node":"172.31.31.232", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 
13:55:20.681541634 +0000 UTC"}, Hostname:"172.31.31.232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.698 [INFO][4060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.698 [INFO][4060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.698 [INFO][4060] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.232' Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.700 [INFO][4060] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.706 [INFO][4060] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.717 [INFO][4060] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.722 [INFO][4060] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.727 [INFO][4060] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.727 [INFO][4060] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.730 [INFO][4060] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.740 [INFO][4060] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.752 [INFO][4060] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.752 [INFO][4060] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" host="172.31.31.232" Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.752 [INFO][4060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:55:20.793092 containerd[2060]: 2025-01-30 13:55:20.752 [INFO][4060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" HandleID="k8s-pod-network.86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Workload="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.794711 containerd[2060]: 2025-01-30 13:55:20.754 [INFO][4049] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"856bc662-691f-442e-a1cf-8c5e7652bd3c", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:20.794711 containerd[2060]: 2025-01-30 13:55:20.755 [INFO][4049] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.195/32] ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.794711 containerd[2060]: 2025-01-30 13:55:20.755 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.794711 containerd[2060]: 2025-01-30 13:55:20.767 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.795363 containerd[2060]: 2025-01-30 13:55:20.773 [INFO][4049] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"856bc662-691f-442e-a1cf-8c5e7652bd3c", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"96:fd:8b:38:46:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:20.795363 containerd[2060]: 2025-01-30 13:55:20.789 [INFO][4049] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.232-k8s-nfs--server--provisioner--0-eth0" Jan 30 13:55:20.872269 containerd[2060]: time="2025-01-30T13:55:20.870828562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:20.874255 containerd[2060]: time="2025-01-30T13:55:20.873724099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:20.874255 containerd[2060]: time="2025-01-30T13:55:20.873823680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:20.874255 containerd[2060]: time="2025-01-30T13:55:20.874155065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:20.979718 containerd[2060]: time="2025-01-30T13:55:20.979677146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:856bc662-691f-442e-a1cf-8c5e7652bd3c,Namespace:default,Attempt:0,} returns sandbox id \"86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c\"" Jan 30 13:55:20.982002 containerd[2060]: time="2025-01-30T13:55:20.981840807Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:55:21.403454 kubelet[2605]: E0130 13:55:21.403393 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:22.404338 kubelet[2605]: E0130 13:55:22.404141 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:22.560227 systemd-networkd[1646]: cali60e51b789ff: Gained IPv6LL Jan 30 13:55:23.404896 kubelet[2605]: E0130 13:55:23.404854 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:24.337108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644979743.mount: Deactivated successfully. Jan 30 13:55:24.405338 kubelet[2605]: E0130 13:55:24.405290 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:24.722469 ntpd[2027]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:55:24.724104 ntpd[2027]: 30 Jan 13:55:24 ntpd[2027]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:55:25.406805 kubelet[2605]: E0130 13:55:25.406761 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:26.407860 kubelet[2605]: E0130 13:55:26.407818 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:27.196345 containerd[2060]: time="2025-01-30T13:55:27.196291201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:27.197983 containerd[2060]: time="2025-01-30T13:55:27.197808068Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:55:27.199725 containerd[2060]: time="2025-01-30T13:55:27.199358660Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:27.203074 containerd[2060]: time="2025-01-30T13:55:27.203032512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:27.204827 containerd[2060]: time="2025-01-30T13:55:27.204735769Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.222684174s" 
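A reading note on the WorkloadEndpoint dump above: it is a Go `%#v`-style rendering, so the `Port` fields come out as hex literals, while the same ports appear in decimal in the endpoint's port list earlier in the trace. They are the standard NFS service ports. A quick decoding, runnable as-is:

```go
package main

import "fmt"

func main() {
	// Hex Port values from the endpoint dump; these equal the decimal
	// ports declared on the nfs-server-provisioner endpoint above.
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-9s %d\n", p.name, p.port)
	}
}
```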
Jan 30 13:55:27.205405 containerd[2060]: time="2025-01-30T13:55:27.205376899Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:55:27.208716 containerd[2060]: time="2025-01-30T13:55:27.208494915Z" level=info msg="CreateContainer within sandbox \"86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:55:27.271532 containerd[2060]: time="2025-01-30T13:55:27.271480378Z" level=info msg="CreateContainer within sandbox \"86a0178510f7a81a5b2dab8e17ba9bff1f4d71e5362efb222000a8b8120c4f5c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0d807a713ded77e3176c15da93995b9f720ed3c7623b9ddc1e1a353998436776\"" Jan 30 13:55:27.272822 containerd[2060]: time="2025-01-30T13:55:27.272780238Z" level=info msg="StartContainer for \"0d807a713ded77e3176c15da93995b9f720ed3c7623b9ddc1e1a353998436776\"" Jan 30 13:55:27.401346 containerd[2060]: time="2025-01-30T13:55:27.401186189Z" level=info msg="StartContainer for \"0d807a713ded77e3176c15da93995b9f720ed3c7623b9ddc1e1a353998436776\" returns successfully" Jan 30 13:55:27.409604 kubelet[2605]: E0130 13:55:27.409567 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:27.817407 kubelet[2605]: I0130 13:55:27.817304 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.591712585 podStartE2EDuration="7.817287635s" podCreationTimestamp="2025-01-30 13:55:20 +0000 UTC" firstStartedPulling="2025-01-30 13:55:20.98109949 +0000 UTC m=+44.583735537" lastFinishedPulling="2025-01-30 13:55:27.206674549 +0000 UTC m=+50.809310587" observedRunningTime="2025-01-30 13:55:27.816838236 +0000 UTC m=+51.419474336" watchObservedRunningTime="2025-01-30 13:55:27.817287635 +0000 UTC m=+51.419923692" Jan 30 13:55:28.410734 kubelet[2605]: E0130 13:55:28.410677 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:29.411741 kubelet[2605]: E0130 13:55:29.411678 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:30.412507 kubelet[2605]: E0130 13:55:30.412452 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:31.413060 kubelet[2605]: E0130 13:55:31.412995 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:32.413830 kubelet[2605]: E0130 13:55:32.413774 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:33.414141 kubelet[2605]: E0130 13:55:33.413952 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:34.415442 kubelet[2605]: E0130 13:55:34.415373 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:35.416382 kubelet[2605]: E0130 13:55:35.416242 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:36.417167 kubelet[2605]: E0130 
13:55:36.417066 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:37.363299 kubelet[2605]: E0130 13:55:37.363241 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:37.406745 containerd[2060]: time="2025-01-30T13:55:37.403355214Z" level=info msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" Jan 30 13:55:37.420653 kubelet[2605]: E0130 13:55:37.419298 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.524 [WARNING][4257] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-csi--node--driver--6m9m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41331529-9d0a-4578-9d4a-d0617145104a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53", Pod:"csi-node-driver-6m9m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38f85c2e272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.524 [INFO][4257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.524 [INFO][4257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" iface="eth0" netns="" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.524 [INFO][4257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.525 [INFO][4257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.577 [INFO][4263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.577 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.577 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.589 [WARNING][4263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.589 [INFO][4263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.592 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:37.596375 containerd[2060]: 2025-01-30 13:55:37.594 [INFO][4257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.597153 containerd[2060]: time="2025-01-30T13:55:37.596443750Z" level=info msg="TearDown network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" successfully" Jan 30 13:55:37.597153 containerd[2060]: time="2025-01-30T13:55:37.596474060Z" level=info msg="StopPodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" returns successfully" Jan 30 13:55:37.606949 containerd[2060]: time="2025-01-30T13:55:37.606574036Z" level=info msg="RemovePodSandbox for \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" Jan 30 13:55:37.606949 containerd[2060]: time="2025-01-30T13:55:37.606659668Z" level=info msg="Forcibly stopping sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\"" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.680 [WARNING][4296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-csi--node--driver--6m9m7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41331529-9d0a-4578-9d4a-d0617145104a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"5977a8d79470f60be5369a757cf306ffbe44b9b47d41d3467d16e1ceeae0dc53", Pod:"csi-node-driver-6m9m7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38f85c2e272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.682 [INFO][4296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.682 [INFO][4296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" iface="eth0" netns="" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.682 [INFO][4296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.682 [INFO][4296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.717 [INFO][4302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.718 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.718 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.745 [WARNING][4302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.745 [INFO][4302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" HandleID="k8s-pod-network.a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Workload="172.31.31.232-k8s-csi--node--driver--6m9m7-eth0" Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.755 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:37.758273 containerd[2060]: 2025-01-30 13:55:37.756 [INFO][4296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456" Jan 30 13:55:37.759620 containerd[2060]: time="2025-01-30T13:55:37.758319578Z" level=info msg="TearDown network for sandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" successfully" Jan 30 13:55:37.774589 containerd[2060]: time="2025-01-30T13:55:37.774533797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:55:37.774854 containerd[2060]: time="2025-01-30T13:55:37.774620534Z" level=info msg="RemovePodSandbox \"a16bb9f1e891dfb4700ab9d0041498de6d4449b42f087249a64a0d631c532456\" returns successfully" Jan 30 13:55:37.776042 containerd[2060]: time="2025-01-30T13:55:37.775995551Z" level=info msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.851 [WARNING][4320] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"90c22885-b57e-41ea-be7e-6de1aec04565", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2", Pod:"nginx-deployment-85f456d6dd-25dwr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calief24f5028f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.852 [INFO][4320] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.852 [INFO][4320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" iface="eth0" netns="" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.852 [INFO][4320] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.852 [INFO][4320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.883 [INFO][4326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.884 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.884 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.894 [WARNING][4326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.894 [INFO][4326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.906 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:37.910031 containerd[2060]: 2025-01-30 13:55:37.908 [INFO][4320] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:37.911813 containerd[2060]: time="2025-01-30T13:55:37.910082074Z" level=info msg="TearDown network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" successfully" Jan 30 13:55:37.911813 containerd[2060]: time="2025-01-30T13:55:37.910112163Z" level=info msg="StopPodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" returns successfully" Jan 30 13:55:37.911813 containerd[2060]: time="2025-01-30T13:55:37.910634918Z" level=info msg="RemovePodSandbox for \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" Jan 30 13:55:37.911813 containerd[2060]: time="2025-01-30T13:55:37.910687346Z" level=info msg="Forcibly stopping sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\"" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.960 [WARNING][4345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"90c22885-b57e-41ea-be7e-6de1aec04565", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"4781ce471daa5c54b2acbb4062bdca8dcc1f6fcee54a9fde3539aad2cde3d3f2", Pod:"nginx-deployment-85f456d6dd-25dwr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calief24f5028f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.961 [INFO][4345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.961 [INFO][4345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" iface="eth0" netns="" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.961 [INFO][4345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.961 [INFO][4345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.993 [INFO][4351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.993 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:37.993 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:38.004 [WARNING][4351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:38.004 [INFO][4351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" HandleID="k8s-pod-network.41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Workload="172.31.31.232-k8s-nginx--deployment--85f456d6dd--25dwr-eth0" Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:38.009 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:55:38.012074 containerd[2060]: 2025-01-30 13:55:38.010 [INFO][4345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81" Jan 30 13:55:38.014024 containerd[2060]: time="2025-01-30T13:55:38.013980006Z" level=info msg="TearDown network for sandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" successfully" Jan 30 13:55:38.019755 containerd[2060]: time="2025-01-30T13:55:38.019699494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:55:38.019883 containerd[2060]: time="2025-01-30T13:55:38.019782078Z" level=info msg="RemovePodSandbox \"41172b15d8f71a9554d5d71612174b3e9e800e0551ccddf9c7e8d586c8402e81\" returns successfully" Jan 30 13:55:38.420310 kubelet[2605]: E0130 13:55:38.420155 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:39.421266 kubelet[2605]: E0130 13:55:39.421217 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:40.422074 kubelet[2605]: E0130 13:55:40.422015 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:41.422733 kubelet[2605]: E0130 13:55:41.422682 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:42.423334 kubelet[2605]: E0130 13:55:42.423274 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:43.424052 kubelet[2605]: E0130 13:55:43.423991 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:44.424594 kubelet[2605]: E0130 13:55:44.424474 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:45.424714 kubelet[2605]: E0130 13:55:45.424659 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:46.425773 kubelet[2605]: E0130 13:55:46.425714 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:47.426911 kubelet[2605]: E0130 13:55:47.426853 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:48.428002 
kubelet[2605]: E0130 13:55:48.427944 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:49.429085 kubelet[2605]: E0130 13:55:49.429024 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:50.430408 kubelet[2605]: E0130 13:55:50.430358 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:51.431451 kubelet[2605]: E0130 13:55:51.431406 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:52.434466 kubelet[2605]: E0130 13:55:52.434398 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:52.629681 kubelet[2605]: I0130 13:55:52.629636 2605 topology_manager.go:215] "Topology Admit Handler" podUID="7c14050b-f9eb-43ff-802f-3c801510d555" podNamespace="default" podName="test-pod-1" Jan 30 13:55:52.791527 kubelet[2605]: I0130 13:55:52.791475 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rph\" (UniqueName: \"kubernetes.io/projected/7c14050b-f9eb-43ff-802f-3c801510d555-kube-api-access-79rph\") pod \"test-pod-1\" (UID: \"7c14050b-f9eb-43ff-802f-3c801510d555\") " pod="default/test-pod-1" Jan 30 13:55:52.791690 kubelet[2605]: I0130 13:55:52.791544 2605 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b4a80bd-7175-409c-bab8-4585ea983769\" (UniqueName: \"kubernetes.io/nfs/7c14050b-f9eb-43ff-802f-3c801510d555-pvc-7b4a80bd-7175-409c-bab8-4585ea983769\") pod \"test-pod-1\" (UID: \"7c14050b-f9eb-43ff-802f-3c801510d555\") " pod="default/test-pod-1" Jan 30 13:55:52.957357 kernel: FS-Cache: Loaded Jan 30 13:55:53.060816 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:55:53.060974 kernel: RPC: Registered udp transport module. Jan 30 13:55:53.061010 kernel: RPC: Registered tcp transport module. Jan 30 13:55:53.061041 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:55:53.061485 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:55:53.436705 kubelet[2605]: E0130 13:55:53.435534 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:53.542801 kernel: NFS: Registering the id_resolver key type Jan 30 13:55:53.543369 kernel: Key type id_resolver registered Jan 30 13:55:53.544353 kernel: Key type id_legacy registered Jan 30 13:55:53.593019 nfsidmap[4385]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 30 13:55:53.599114 nfsidmap[4387]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 30 13:55:53.835060 containerd[2060]: time="2025-01-30T13:55:53.834931628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7c14050b-f9eb-43ff-802f-3c801510d555,Namespace:default,Attempt:0,}" Jan 30 13:55:54.015217 systemd-networkd[1646]: cali5ec59c6bf6e: Link UP Jan 30 13:55:54.015692 systemd-networkd[1646]: cali5ec59c6bf6e: Gained carrier Jan 30 13:55:54.020311 (udev-worker)[4372]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.897 [INFO][4389] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.232-k8s-test--pod--1-eth0 default 7c14050b-f9eb-43ff-802f-3c801510d555 1229 0 2025-01-30 13:55:21 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.31.232 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.897 [INFO][4389] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.935 [INFO][4399] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" HandleID="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Workload="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.953 [INFO][4399] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" HandleID="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Workload="172.31.31.232-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336ac0), Attrs:map[string]string{"namespace":"default", "node":"172.31.31.232", "pod":"test-pod-1", "timestamp":"2025-01-30 13:55:53.935939414 +0000 UTC"}, Hostname:"172.31.31.232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.953 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.953 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.953 [INFO][4399] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.232' Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.956 [INFO][4399] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.962 [INFO][4399] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.969 [INFO][4399] ipam/ipam.go 489: Trying affinity for 192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.971 [INFO][4399] ipam/ipam.go 155: Attempting to load block cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.975 [INFO][4399] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.975 [INFO][4399] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.978 [INFO][4399] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3 Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.986 [INFO][4399] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.997 [INFO][4399] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.997 [INFO][4399] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.119.196/26] handle="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" host="172.31.31.232" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.998 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:53.998 [INFO][4399] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" HandleID="k8s-pod-network.16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Workload="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.047261 containerd[2060]: 2025-01-30 13:55:54.005 [INFO][4389] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7c14050b-f9eb-43ff-802f-3c801510d555", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:54.048371 containerd[2060]: 2025-01-30 13:55:54.006 [INFO][4389] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.119.196/32] ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.048371 containerd[2060]: 2025-01-30 13:55:54.006 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.048371 containerd[2060]: 2025-01-30 13:55:54.010 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.048371 containerd[2060]: 2025-01-30 13:55:54.014 [INFO][4389] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.232-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7c14050b-f9eb-43ff-802f-3c801510d555", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 55, 21, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.232", ContainerID:"16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"72:6a:21:fa:34:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:55:54.048371 containerd[2060]: 2025-01-30 13:55:54.037 [INFO][4389] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.232-k8s-test--pod--1-eth0" Jan 30 13:55:54.088738 containerd[2060]: time="2025-01-30T13:55:54.088337589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:54.088738 containerd[2060]: time="2025-01-30T13:55:54.088411918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:54.088738 containerd[2060]: time="2025-01-30T13:55:54.088451948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:54.088738 containerd[2060]: time="2025-01-30T13:55:54.088582035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:54.170439 containerd[2060]: time="2025-01-30T13:55:54.170390187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7c14050b-f9eb-43ff-802f-3c801510d555,Namespace:default,Attempt:0,} returns sandbox id \"16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3\"" Jan 30 13:55:54.188377 containerd[2060]: time="2025-01-30T13:55:54.188337619Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:55:54.435870 kubelet[2605]: E0130 13:55:54.435731 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:54.503228 containerd[2060]: time="2025-01-30T13:55:54.503154472Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:54.504572 containerd[2060]: time="2025-01-30T13:55:54.504510980Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:55:54.507841 containerd[2060]: time="2025-01-30T13:55:54.507781414Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 319.398716ms" Jan 30 13:55:54.507841 containerd[2060]: time="2025-01-30T13:55:54.507830495Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:55:54.512569 containerd[2060]: time="2025-01-30T13:55:54.512525987Z" level=info msg="CreateContainer within sandbox \"16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:55:54.541486 containerd[2060]: time="2025-01-30T13:55:54.541435777Z" level=info msg="CreateContainer within sandbox \"16349d5ba2b619216888807c5b097905dc69326838aa643e2e2ab89f8d7561e3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5ab849e1a65d5e040bf60446724c150b901b97a1284b42ecc2d9a4a6da6a5a4e\"" Jan 30 13:55:54.546130 containerd[2060]: time="2025-01-30T13:55:54.543302707Z" level=info msg="StartContainer for \"5ab849e1a65d5e040bf60446724c150b901b97a1284b42ecc2d9a4a6da6a5a4e\"" Jan 30 13:55:54.663902 containerd[2060]: time="2025-01-30T13:55:54.663722872Z" level=info msg="StartContainer for \"5ab849e1a65d5e040bf60446724c150b901b97a1284b42ecc2d9a4a6da6a5a4e\" returns successfully" Jan 30 13:55:54.985825 kubelet[2605]: I0130 13:55:54.985770 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.648430977 podStartE2EDuration="33.985748297s" podCreationTimestamp="2025-01-30 13:55:21 +0000 UTC" firstStartedPulling="2025-01-30 13:55:54.17214181 +0000 UTC m=+77.774777856" lastFinishedPulling="2025-01-30 13:55:54.509459124 +0000 UTC m=+78.112095176" observedRunningTime="2025-01-30 13:55:54.985731073 +0000 UTC m=+78.588367136" watchObservedRunningTime="2025-01-30 13:55:54.985748297 +0000 UTC m=+78.588384368" Jan 30 13:55:55.436699 kubelet[2605]: E0130 13:55:55.436568 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:55.776249 systemd-networkd[1646]: cali5ec59c6bf6e: Gained 
IPv6LL Jan 30 13:55:56.437260 kubelet[2605]: E0130 13:55:56.437182 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:57.362520 kubelet[2605]: E0130 13:55:57.362467 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:57.438425 kubelet[2605]: E0130 13:55:57.438368 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:58.439622 kubelet[2605]: E0130 13:55:58.439566 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:58.721942 ntpd[2027]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:55:58.722590 ntpd[2027]: 30 Jan 13:55:58 ntpd[2027]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:55:59.440382 kubelet[2605]: E0130 13:55:59.440334 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:00.441372 kubelet[2605]: E0130 13:56:00.441248 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:01.443306 kubelet[2605]: E0130 13:56:01.443029 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:02.443738 kubelet[2605]: E0130 13:56:02.443674 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:03.444557 kubelet[2605]: E0130 13:56:03.444496 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:04.445751 kubelet[2605]: E0130 13:56:04.445694 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:05.446908 kubelet[2605]: E0130 13:56:05.446773 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:06.447822 kubelet[2605]: E0130 13:56:06.447757 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:07.448866 kubelet[2605]: E0130 13:56:07.448810 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:08.449866 kubelet[2605]: E0130 13:56:08.449800 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:09.450865 kubelet[2605]: E0130 13:56:09.450803 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:10.451891 kubelet[2605]: E0130 13:56:10.451831 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:11.452329 kubelet[2605]: E0130 13:56:11.452270 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:12.452823 kubelet[2605]: E0130 13:56:12.452766 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:13.453897 kubelet[2605]: E0130 
13:56:13.453831 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:14.455044 kubelet[2605]: E0130 13:56:14.454986 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:15.456052 kubelet[2605]: E0130 13:56:15.455988 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:16.456439 kubelet[2605]: E0130 13:56:16.456377 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:17.362424 kubelet[2605]: E0130 13:56:17.362371 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:17.457422 kubelet[2605]: E0130 13:56:17.457293 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:18.458910 kubelet[2605]: E0130 13:56:18.458584 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:19.128622 kubelet[2605]: E0130 13:56:19.128552 2605 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": context deadline exceeded" Jan 30 13:56:19.459410 kubelet[2605]: E0130 13:56:19.459261 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:20.459600 kubelet[2605]: E0130 13:56:20.459542 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:21.460051 kubelet[2605]: E0130 13:56:21.459990 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:22.461114 kubelet[2605]: E0130 13:56:22.461059 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:23.462046 kubelet[2605]: E0130 13:56:23.461970 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:24.462878 kubelet[2605]: E0130 13:56:24.462832 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:25.463049 kubelet[2605]: E0130 13:56:25.462990 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:26.463589 kubelet[2605]: E0130 13:56:26.463535 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:27.464194 kubelet[2605]: E0130 13:56:27.464136 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:28.464990 kubelet[2605]: E0130 13:56:28.464929 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:29.129748 kubelet[2605]: E0130 13:56:29.129678 2605 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:56:29.465500 kubelet[2605]: E0130 13:56:29.465367 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:30.465819 kubelet[2605]: E0130 13:56:30.465763 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:31.466455 kubelet[2605]: E0130 13:56:31.466398 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:31.636612 systemd[1]: run-containerd-runc-k8s.io-1f292d1c97dde9ed417114fae7b7f7b097221f9c7abb4d0b0fe1e78aaedac643-runc.0UohXl.mount: Deactivated successfully. Jan 30 13:56:32.467446 kubelet[2605]: E0130 13:56:32.467305 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:33.468168 kubelet[2605]: E0130 13:56:33.468111 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:34.468553 kubelet[2605]: E0130 13:56:34.468496 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:35.008047 kubelet[2605]: E0130 13:56:35.007993 2605 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": unexpected EOF" Jan 30 13:56:35.020384 kubelet[2605]: E0130 13:56:35.020339 2605 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" Jan 30 13:56:35.021225 kubelet[2605]: E0130 13:56:35.020969 2605 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" Jan 30 13:56:35.021225 kubelet[2605]: I0130 13:56:35.021016 2605 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 13:56:35.023216 kubelet[2605]: E0130 13:56:35.021764 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="200ms" Jan 30 13:56:35.223388 kubelet[2605]: E0130 13:56:35.223333 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="400ms" Jan 30 13:56:35.469109 kubelet[2605]: E0130 13:56:35.468947 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:35.624639 kubelet[2605]: E0130 13:56:35.624522 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="800ms" Jan 30 
13:56:36.425947 kubelet[2605]: E0130 13:56:36.425891 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="1.6s" Jan 30 13:56:36.469430 kubelet[2605]: E0130 13:56:36.469373 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:37.362652 kubelet[2605]: E0130 13:56:37.362606 2605 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:37.470219 kubelet[2605]: E0130 13:56:37.470162 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:38.027181 kubelet[2605]: E0130 13:56:38.027129 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="3.2s" Jan 30 13:56:38.471573 kubelet[2605]: E0130 13:56:38.471299 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:39.473423 kubelet[2605]: E0130 13:56:39.473367 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:40.474164 kubelet[2605]: E0130 13:56:40.474118 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:41.475017 kubelet[2605]: E0130 13:56:41.474919 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:42.476094 kubelet[2605]: E0130 13:56:42.476037 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:43.476307 kubelet[2605]: E0130 13:56:43.476251 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:44.476816 kubelet[2605]: E0130 13:56:44.476686 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:45.477345 kubelet[2605]: E0130 13:56:45.477287 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:46.477871 kubelet[2605]: E0130 13:56:46.477828 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:47.478666 kubelet[2605]: E0130 13:56:47.478605 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:48.479829 kubelet[2605]: E0130 13:56:48.479766 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:49.480113 kubelet[2605]: E0130 13:56:49.480065 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:50.346574 kubelet[2605]: E0130 13:56:50.346516 2605 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.31.232\": Get 
\"https://172.31.23.102:6443/api/v1/nodes/172.31.31.232?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:56:50.480315 kubelet[2605]: E0130 13:56:50.480257 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:51.228803 kubelet[2605]: E0130 13:56:51.228746 2605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.232?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 30 13:56:51.480897 kubelet[2605]: E0130 13:56:51.480846 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:52.481574 kubelet[2605]: E0130 13:56:52.481517 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:53.482406 kubelet[2605]: E0130 13:56:53.482355 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:54.482768 kubelet[2605]: E0130 13:56:54.482719 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:55.484073 kubelet[2605]: E0130 13:56:55.483594 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:56:56.484591 kubelet[2605]: E0130 13:56:56.484532 2605 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"