Jan 30 13:52:43.075323 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:52:43.075363 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:43.075381 kernel: BIOS-provided physical RAM map:
Jan 30 13:52:43.075393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:52:43.075405 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:52:43.075416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:52:43.075434 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 30 13:52:43.075448 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 30 13:52:43.075461 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 30 13:52:43.075474 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:52:43.075487 kernel: NX (Execute Disable) protection: active
Jan 30 13:52:43.075500 kernel: APIC: Static calls initialized
Jan 30 13:52:43.075512 kernel: SMBIOS 2.7 present.
Jan 30 13:52:43.076885 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 30 13:52:43.076917 kernel: Hypervisor detected: KVM
Jan 30 13:52:43.076930 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:52:43.076942 kernel: kvm-clock: using sched offset of 6501933126 cycles
Jan 30 13:52:43.076956 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:52:43.076968 kernel: tsc: Detected 2499.998 MHz processor
Jan 30 13:52:43.076980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:52:43.076993 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:52:43.077007 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 30 13:52:43.077019 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:52:43.077032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:52:43.077044 kernel: Using GB pages for direct mapping
Jan 30 13:52:43.077056 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:52:43.077068 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 30 13:52:43.077080 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 30 13:52:43.077091 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:52:43.077103 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 30 13:52:43.077117 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 30 13:52:43.077130 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:52:43.077142 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:52:43.077156 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 30 13:52:43.077170 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:52:43.077184 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 30 13:52:43.077198 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 30 13:52:43.077212 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:52:43.077226 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 30 13:52:43.077244 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 30 13:52:43.077264 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 30 13:52:43.077279 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 30 13:52:43.077294 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 30 13:52:43.077310 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 30 13:52:43.077328 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 30 13:52:43.077343 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 30 13:52:43.077358 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 30 13:52:43.077373 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 30 13:52:43.077388 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:52:43.077403 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:52:43.077418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 30 13:52:43.077433 kernel: NUMA: Initialized distance table, cnt=1
Jan 30 13:52:43.077447 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 30 13:52:43.077465 kernel: Zone ranges:
Jan 30 13:52:43.077480 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:52:43.077496 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 30 13:52:43.077511 kernel: Normal empty
Jan 30 13:52:43.077555 kernel: Movable zone start for each node
Jan 30 13:52:43.077571 kernel: Early memory node ranges
Jan 30 13:52:43.077586 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:52:43.077601 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 30 13:52:43.077616 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 30 13:52:43.077635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:52:43.077650 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:52:43.077665 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 30 13:52:43.077680 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:52:43.077695 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:52:43.077710 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 30 13:52:43.077725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:52:43.077740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:52:43.077755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:52:43.077770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:52:43.077788 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:52:43.077804 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:52:43.077819 kernel: TSC deadline timer available
Jan 30 13:52:43.077834 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:52:43.077849 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:52:43.077864 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 30 13:52:43.077879 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:52:43.077894 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:52:43.077910 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:52:43.077928 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:52:43.077944 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:52:43.077958 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:52:43.077973 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:52:43.077988 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:52:43.078005 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:43.078021 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:52:43.078035 kernel: random: crng init done
Jan 30 13:52:43.078053 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:52:43.078068 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:52:43.078083 kernel: Fallback order for Node 0: 0
Jan 30 13:52:43.078097 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 30 13:52:43.078112 kernel: Policy zone: DMA32
Jan 30 13:52:43.078128 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:52:43.078143 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 13:52:43.078158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:52:43.078176 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:52:43.078191 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:52:43.078206 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:52:43.078221 kernel: Dynamic Preempt: voluntary
Jan 30 13:52:43.078236 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:52:43.078252 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:52:43.078268 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:52:43.078283 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:52:43.078298 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:52:43.078313 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:52:43.078331 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:52:43.078347 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:52:43.078362 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:52:43.078376 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:52:43.078392 kernel: Console: colour VGA+ 80x25
Jan 30 13:52:43.078407 kernel: printk: console [ttyS0] enabled
Jan 30 13:52:43.078422 kernel: ACPI: Core revision 20230628
Jan 30 13:52:43.078437 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 30 13:52:43.078452 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:52:43.078470 kernel: x2apic enabled
Jan 30 13:52:43.078486 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:52:43.078512 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 13:52:43.079495 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 30 13:52:43.079517 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:52:43.079558 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:52:43.079575 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:52:43.079591 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:52:43.079606 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:52:43.079622 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:52:43.079639 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:52:43.079655 kernel: RETBleed: Vulnerable
Jan 30 13:52:43.079676 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:52:43.079692 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:52:43.079708 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:52:43.079724 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:52:43.079740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:52:43.079756 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:52:43.079772 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:52:43.079791 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:52:43.079807 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:52:43.079823 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:52:43.079839 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:52:43.079855 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:52:43.079871 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 13:52:43.079887 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:52:43.079903 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:52:43.079919 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:52:43.079935 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 30 13:52:43.079951 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 30 13:52:43.079970 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 30 13:52:43.079986 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 30 13:52:43.080002 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 30 13:52:43.080018 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:52:43.080033 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:52:43.080049 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:52:43.080065 kernel: landlock: Up and running.
Jan 30 13:52:43.080081 kernel: SELinux: Initializing.
Jan 30 13:52:43.080097 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:52:43.080113 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:52:43.080129 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:52:43.080148 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:43.080165 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:43.080181 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:52:43.080197 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:52:43.080214 kernel: signal: max sigframe size: 3632
Jan 30 13:52:43.080230 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:52:43.080247 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:52:43.080263 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:52:43.080279 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:52:43.080299 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:52:43.080315 kernel: .... node #0, CPUs: #1
Jan 30 13:52:43.080332 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:52:43.080349 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:52:43.080365 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:52:43.080381 kernel: smpboot: Max logical packages: 1
Jan 30 13:52:43.080397 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 30 13:52:43.080413 kernel: devtmpfs: initialized
Jan 30 13:52:43.080431 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:52:43.080448 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:52:43.080463 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:52:43.080476 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:52:43.080491 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:52:43.080516 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:52:43.080581 kernel: audit: type=2000 audit(1738245161.887:1): state=initialized audit_enabled=0 res=1
Jan 30 13:52:43.080597 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:52:43.080612 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:52:43.080631 kernel: cpuidle: using governor menu
Jan 30 13:52:43.080647 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:52:43.080663 kernel: dca service started, version 1.12.1
Jan 30 13:52:43.080679 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:52:43.080695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:52:43.080710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:52:43.080726 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:52:43.080741 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:52:43.080757 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:52:43.080775 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:52:43.080877 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:52:43.080893 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:52:43.080909 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:52:43.080924 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:52:43.080940 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:52:43.080955 kernel: ACPI: Interpreter enabled
Jan 30 13:52:43.080971 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:52:43.080987 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:52:43.081003 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:52:43.081022 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:52:43.081038 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:52:43.081053 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:52:43.081275 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:52:43.081425 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:52:43.082645 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:52:43.082675 kernel: acpiphp: Slot [3] registered
Jan 30 13:52:43.082697 kernel: acpiphp: Slot [4] registered
Jan 30 13:52:43.082712 kernel: acpiphp: Slot [5] registered
Jan 30 13:52:43.082727 kernel: acpiphp: Slot [6] registered
Jan 30 13:52:43.082742 kernel: acpiphp: Slot [7] registered
Jan 30 13:52:43.082757 kernel: acpiphp: Slot [8] registered
Jan 30 13:52:43.082773 kernel: acpiphp: Slot [9] registered
Jan 30 13:52:43.082788 kernel: acpiphp: Slot [10] registered
Jan 30 13:52:43.082803 kernel: acpiphp: Slot [11] registered
Jan 30 13:52:43.082818 kernel: acpiphp: Slot [12] registered
Jan 30 13:52:43.082835 kernel: acpiphp: Slot [13] registered
Jan 30 13:52:43.082850 kernel: acpiphp: Slot [14] registered
Jan 30 13:52:43.082865 kernel: acpiphp: Slot [15] registered
Jan 30 13:52:43.082880 kernel: acpiphp: Slot [16] registered
Jan 30 13:52:43.082894 kernel: acpiphp: Slot [17] registered
Jan 30 13:52:43.082909 kernel: acpiphp: Slot [18] registered
Jan 30 13:52:43.082924 kernel: acpiphp: Slot [19] registered
Jan 30 13:52:43.082939 kernel: acpiphp: Slot [20] registered
Jan 30 13:52:43.082953 kernel: acpiphp: Slot [21] registered
Jan 30 13:52:43.082968 kernel: acpiphp: Slot [22] registered
Jan 30 13:52:43.082986 kernel: acpiphp: Slot [23] registered
Jan 30 13:52:43.083001 kernel: acpiphp: Slot [24] registered
Jan 30 13:52:43.083015 kernel: acpiphp: Slot [25] registered
Jan 30 13:52:43.083030 kernel: acpiphp: Slot [26] registered
Jan 30 13:52:43.083045 kernel: acpiphp: Slot [27] registered
Jan 30 13:52:43.083059 kernel: acpiphp: Slot [28] registered
Jan 30 13:52:43.083074 kernel: acpiphp: Slot [29] registered
Jan 30 13:52:43.083089 kernel: acpiphp: Slot [30] registered
Jan 30 13:52:43.083103 kernel: acpiphp: Slot [31] registered
Jan 30 13:52:43.083121 kernel: PCI host bridge to bus 0000:00
Jan 30 13:52:43.083266 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:52:43.083381 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:52:43.083492 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:52:43.084686 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:52:43.084818 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:52:43.084970 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:52:43.085233 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:52:43.085382 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 30 13:52:43.085513 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:52:43.085677 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 30 13:52:43.085807 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 30 13:52:43.085937 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 30 13:52:43.086065 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 30 13:52:43.086237 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 30 13:52:43.087315 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 30 13:52:43.087487 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 30 13:52:43.087653 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 30 13:52:43.087805 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 30 13:52:43.087952 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:52:43.088094 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:52:43.088347 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:52:43.088605 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 30 13:52:43.088749 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:52:43.088882 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 30 13:52:43.088901 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:52:43.088917 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:52:43.088938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:52:43.088953 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:52:43.088968 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:52:43.088983 kernel: iommu: Default domain type: Translated
Jan 30 13:52:43.088998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:52:43.089014 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:52:43.089028 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:52:43.089044 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:52:43.089059 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 30 13:52:43.089633 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 30 13:52:43.089794 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 30 13:52:43.089939 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:52:43.089961 kernel: vgaarb: loaded
Jan 30 13:52:43.089979 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 13:52:43.089996 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 30 13:52:43.090012 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:52:43.090030 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:52:43.090047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:52:43.090068 kernel: pnp: PnP ACPI init
Jan 30 13:52:43.090086 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:52:43.090103 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:52:43.090121 kernel: NET: Registered PF_INET protocol family
Jan 30 13:52:43.090138 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:52:43.090155 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:52:43.090172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:52:43.090189 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:52:43.090206 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:52:43.090227 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:52:43.090244 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:52:43.090261 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:52:43.090278 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:52:43.090295 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:52:43.090451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:52:43.090613 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:52:43.090737 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:52:43.090865 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:52:43.091007 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:52:43.091026 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:52:43.091041 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:52:43.091055 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 13:52:43.091069 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:52:43.091083 kernel: Initialise system trusted keyrings
Jan 30 13:52:43.091097 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:52:43.091116 kernel: Key type asymmetric registered
Jan 30 13:52:43.091129 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:52:43.091143 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:52:43.091157 kernel: io scheduler mq-deadline registered
Jan 30 13:52:43.091171 kernel: io scheduler kyber registered
Jan 30 13:52:43.091185 kernel: io scheduler bfq registered
Jan 30 13:52:43.091198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:52:43.091213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:52:43.091227 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:52:43.091244 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:52:43.091258 kernel: i8042: Warning: Keylock active
Jan 30 13:52:43.091272 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:52:43.091286 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:52:43.091431 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:52:43.091596 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:52:43.091718 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:52:42 UTC (1738245162)
Jan 30 13:52:43.091838 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:52:43.091859 kernel: intel_pstate: CPU model not supported
Jan 30 13:52:43.091999 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:52:43.092021 kernel: Segment Routing with IPv6
Jan 30 13:52:43.092039 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:52:43.092054 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:52:43.092070 kernel: Key type dns_resolver registered
Jan 30 13:52:43.092086 kernel: IPI shorthand broadcast: enabled
Jan 30 13:52:43.092103 kernel: sched_clock: Marking stable (647062155, 304913062)->(1059414759, -107439542)
Jan 30 13:52:43.092120 kernel: registered taskstats version 1
Jan 30 13:52:43.092141 kernel: Loading compiled-in X.509 certificates
Jan 30 13:52:43.092159 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:52:43.092175 kernel: Key type .fscrypt registered
Jan 30 13:52:43.092192 kernel: Key type fscrypt-provisioning registered
Jan 30 13:52:43.092209 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:52:43.092226 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:52:43.092243 kernel: ima: No architecture policies found
Jan 30 13:52:43.092260 kernel: clk: Disabling unused clocks
Jan 30 13:52:43.092277 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:52:43.092298 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:52:43.092315 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:52:43.092331 kernel: Run /init as init process
Jan 30 13:52:43.092349 kernel: with arguments:
Jan 30 13:52:43.092366 kernel: /init
Jan 30 13:52:43.092382 kernel: with environment:
Jan 30 13:52:43.092400 kernel: HOME=/
Jan 30 13:52:43.092416 kernel: TERM=linux
Jan 30 13:52:43.092432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:52:43.092460 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:52:43.092579 systemd[1]: Detected virtualization amazon.
Jan 30 13:52:43.092603 systemd[1]: Detected architecture x86-64.
Jan 30 13:52:43.092622 systemd[1]: Running in initrd.
Jan 30 13:52:43.092640 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:52:43.092661 systemd[1]: Hostname set to <localhost>.
Jan 30 13:52:43.092681 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:52:43.092698 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:52:43.092717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:52:43.092735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:52:43.092755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:52:43.092772 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:52:43.092790 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:52:43.092812 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:52:43.092834 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:52:43.092851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:52:43.092866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:52:43.092881 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:52:43.092899 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:52:43.092915 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:52:43.092935 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:52:43.092951 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:52:43.092967 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:52:43.092984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:52:43.093002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:52:43.093019 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:52:43.093036 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:52:43.093053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:52:43.093074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:52:43.093090 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:52:43.093108 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:52:43.093132 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:52:43.093156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:52:43.093175 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:52:43.093194 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:52:43.093217 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:52:43.093240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:52:43.093259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:43.093278 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:52:43.093297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:52:43.093360 systemd-journald[178]: Collecting audit messages is disabled.
Jan 30 13:52:43.093406 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:52:43.093427 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:52:43.093448 systemd-journald[178]: Journal started
Jan 30 13:52:43.093487 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2e6fa11b7dcf7efdbd859a81d4f77b) is 4.8M, max 38.6M, 33.7M free.
Jan 30 13:52:43.103572 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:52:43.094255 systemd-modules-load[179]: Inserted module 'overlay'
Jan 30 13:52:43.126762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:52:43.248600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:52:43.248639 kernel: Bridge firewalling registered
Jan 30 13:52:43.140625 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 30 13:52:43.251949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:52:43.256807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:43.262281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:52:43.273914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:43.283871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:52:43.290463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:52:43.292155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:52:43.323355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:52:43.339930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:52:43.342832 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:52:43.345034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:52:43.349714 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:52:43.385273 dracut-cmdline[215]: dracut-dracut-053
Jan 30 13:52:43.392968 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:52:43.409506 systemd-resolved[210]: Positive Trust Anchors:
Jan 30 13:52:43.409543 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:52:43.409603 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:52:43.425245 systemd-resolved[210]: Defaulting to hostname 'linux'.
Jan 30 13:52:43.427923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:52:43.429662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:52:43.512561 kernel: SCSI subsystem initialized
Jan 30 13:52:43.522565 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:52:43.533554 kernel: iscsi: registered transport (tcp)
Jan 30 13:52:43.557699 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:52:43.557777 kernel: QLogic iSCSI HBA Driver
Jan 30 13:52:43.597145 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:52:43.603719 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:52:43.645590 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:52:43.645668 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:52:43.645690 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:52:43.687557 kernel: raid6: avx512x4 gen() 17545 MB/s
Jan 30 13:52:43.704554 kernel: raid6: avx512x2 gen() 17140 MB/s
Jan 30 13:52:43.721559 kernel: raid6: avx512x1 gen() 16776 MB/s
Jan 30 13:52:43.738554 kernel: raid6: avx2x4 gen() 17582 MB/s
Jan 30 13:52:43.755561 kernel: raid6: avx2x2 gen() 16823 MB/s
Jan 30 13:52:43.772627 kernel: raid6: avx2x1 gen() 12922 MB/s
Jan 30 13:52:43.772701 kernel: raid6: using algorithm avx2x4 gen() 17582 MB/s
Jan 30 13:52:43.790556 kernel: raid6: .... xor() 6942 MB/s, rmw enabled
Jan 30 13:52:43.790624 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:52:43.812556 kernel: xor: automatically using best checksumming function avx
Jan 30 13:52:44.011567 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:52:44.022149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:52:44.029731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:52:44.056211 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 30 13:52:44.061577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:52:44.071491 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:52:44.095361 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jan 30 13:52:44.133929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:52:44.149636 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:52:44.218926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:52:44.230615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:52:44.261003 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:52:44.264982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:52:44.268705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:52:44.270196 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:52:44.279842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:52:44.313930 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:52:44.331561 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:52:44.342217 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 13:52:44.360803 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 13:52:44.361040 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 30 13:52:44.361202 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:80:e1:72:20:17
Jan 30 13:52:44.353261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:52:44.353483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:52:44.370974 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:52:44.371010 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:52:44.355455 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:44.357211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:52:44.357400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:44.361652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:44.366437 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:52:44.375766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:52:44.394604 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 13:52:44.397611 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:52:44.414561 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 13:52:44.421550 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:52:44.421612 kernel: GPT:9289727 != 16777215
Jan 30 13:52:44.421634 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:52:44.421654 kernel: GPT:9289727 != 16777215
Jan 30 13:52:44.421673 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:52:44.421693 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:52:44.542647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:52:44.551894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:52:44.570497 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (461)
Jan 30 13:52:44.576557 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (462)
Jan 30 13:52:44.615937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:52:44.675314 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 13:52:44.710860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 13:52:44.726905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 13:52:44.728374 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 13:52:44.737173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:52:44.742878 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:52:44.751402 disk-uuid[632]: Primary Header is updated.
Jan 30 13:52:44.751402 disk-uuid[632]: Secondary Entries is updated.
Jan 30 13:52:44.751402 disk-uuid[632]: Secondary Header is updated.
Jan 30 13:52:44.758558 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:52:44.764567 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:52:44.770554 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:52:45.775614 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:52:45.776971 disk-uuid[633]: The operation has completed successfully.
Jan 30 13:52:45.950260 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:52:45.950383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:52:45.994732 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:52:46.001685 sh[976]: Success
Jan 30 13:52:46.035152 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:52:46.201037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:52:46.220854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:52:46.225906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:52:46.258179 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:52:46.258259 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:46.258279 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:52:46.258297 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:52:46.259551 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:52:46.379607 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:52:46.392584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:52:46.393821 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:52:46.406130 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:52:46.412802 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:52:46.445059 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:46.445592 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:46.445730 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:52:46.452558 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:52:46.477018 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:52:46.482558 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:46.498511 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:52:46.511053 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:52:46.609058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:52:46.621927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:52:46.707795 systemd-networkd[1168]: lo: Link UP
Jan 30 13:52:46.707810 systemd-networkd[1168]: lo: Gained carrier
Jan 30 13:52:46.713121 systemd-networkd[1168]: Enumeration completed
Jan 30 13:52:46.713415 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:52:46.716293 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:52:46.716297 systemd[1]: Reached target network.target - Network.
Jan 30 13:52:46.716299 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:52:46.728032 systemd-networkd[1168]: eth0: Link UP
Jan 30 13:52:46.728043 systemd-networkd[1168]: eth0: Gained carrier
Jan 30 13:52:46.728060 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:52:46.742641 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.23.102/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:52:46.929469 ignition[1093]: Ignition 2.19.0
Jan 30 13:52:46.929653 ignition[1093]: Stage: fetch-offline
Jan 30 13:52:46.929916 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:46.932969 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:52:46.929926 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:52:46.930596 ignition[1093]: Ignition finished successfully
Jan 30 13:52:46.950085 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:52:46.985581 ignition[1177]: Ignition 2.19.0
Jan 30 13:52:46.985598 ignition[1177]: Stage: fetch
Jan 30 13:52:46.986077 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:46.986089 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:52:46.986200 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:52:47.014511 ignition[1177]: PUT result: OK
Jan 30 13:52:47.018230 ignition[1177]: parsed url from cmdline: ""
Jan 30 13:52:47.018270 ignition[1177]: no config URL provided
Jan 30 13:52:47.018281 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:52:47.018299 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:52:47.018330 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:52:47.021065 ignition[1177]: PUT result: OK
Jan 30 13:52:47.022215 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 13:52:47.029520 ignition[1177]: GET result: OK
Jan 30 13:52:47.030984 ignition[1177]: parsing config with SHA512: 1ee6bb963f387ec9b75ead8dae96d58d5981369e71c26acaa48ac194680551dfa9ac892748eca0dee78ef5859d6f80353727194d45c5bc7531adb7a6fd324aa6
Jan 30 13:52:47.038551 unknown[1177]: fetched base config from "system"
Jan 30 13:52:47.038566 unknown[1177]: fetched base config from "system"
Jan 30 13:52:47.038577 unknown[1177]: fetched user config from "aws"
Jan 30 13:52:47.041780 ignition[1177]: fetch: fetch complete
Jan 30 13:52:47.041862 ignition[1177]: fetch: fetch passed
Jan 30 13:52:47.043942 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:52:47.041952 ignition[1177]: Ignition finished successfully
Jan 30 13:52:47.050758 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:52:47.076763 ignition[1183]: Ignition 2.19.0
Jan 30 13:52:47.076776 ignition[1183]: Stage: kargs
Jan 30 13:52:47.077425 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:47.077438 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:52:47.077674 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:52:47.078987 ignition[1183]: PUT result: OK
Jan 30 13:52:47.086920 ignition[1183]: kargs: kargs passed
Jan 30 13:52:47.087012 ignition[1183]: Ignition finished successfully
Jan 30 13:52:47.092575 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:52:47.105995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:52:47.126542 ignition[1189]: Ignition 2.19.0
Jan 30 13:52:47.126577 ignition[1189]: Stage: disks
Jan 30 13:52:47.127818 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:47.127834 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:52:47.128551 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:52:47.132233 ignition[1189]: PUT result: OK
Jan 30 13:52:47.135447 ignition[1189]: disks: disks passed
Jan 30 13:52:47.135541 ignition[1189]: Ignition finished successfully
Jan 30 13:52:47.138611 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:52:47.142056 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:52:47.146183 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:52:47.151862 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:52:47.153498 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:52:47.154469 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:52:47.165869 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:52:47.228181 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:52:47.236109 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:52:47.249038 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:52:47.412233 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:52:47.412973 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:52:47.415462 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:52:47.427747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:52:47.431677 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:52:47.432701 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:52:47.432762 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:52:47.432797 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:52:47.445951 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:52:47.449695 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:52:47.469377 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1216)
Jan 30 13:52:47.469435 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:47.469448 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:47.470899 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:52:47.484615 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:52:47.485803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:52:47.854320 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:52:47.876881 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:52:47.883830 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:52:47.900702 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:52:48.290766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:52:48.299658 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:52:48.302711 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:52:48.317971 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:48.317096 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:52:48.354637 ignition[1329]: INFO : Ignition 2.19.0
Jan 30 13:52:48.354637 ignition[1329]: INFO : Stage: mount
Jan 30 13:52:48.359257 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:52:48.359257 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:52:48.359257 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:52:48.365625 ignition[1329]: INFO : PUT result: OK
Jan 30 13:52:48.364051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:52:48.369559 ignition[1329]: INFO : mount: mount passed
Jan 30 13:52:48.370669 ignition[1329]: INFO : Ignition finished successfully
Jan 30 13:52:48.371892 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:52:48.378753 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:52:48.416879 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:52:48.466561 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340)
Jan 30 13:52:48.469224 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:52:48.469292 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:52:48.469316 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:52:48.479559 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:52:48.483161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:52:48.507930 ignition[1356]: INFO : Ignition 2.19.0 Jan 30 13:52:48.507930 ignition[1356]: INFO : Stage: files Jan 30 13:52:48.510149 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:48.510149 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:52:48.510149 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:52:48.510149 ignition[1356]: INFO : PUT result: OK Jan 30 13:52:48.525424 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:52:48.546816 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:52:48.546816 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:52:48.585918 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:52:48.587617 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:52:48.589540 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:52:48.589159 unknown[1356]: wrote ssh authorized keys file for user: core Jan 30 13:52:48.593242 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:52:48.595574 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:52:48.598467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:52:48.598467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:52:48.703888 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:52:48.723703 systemd-networkd[1168]: eth0: Gained IPv6LL Jan 30 13:52:48.861128 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:52:48.861128 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:52:48.866138 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:52:48.868230 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:52:48.870412 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:52:48.870412 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:48.874446 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:52:48.874446 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:48.880678 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:52:48.880678 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:52:48.880678 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:52:48.880678 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:48.891114 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:48.891114 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:48.891114 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:52:49.339590 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:52:49.867464 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:52:49.867464 ignition[1356]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:52:49.874989 ignition[1356]: INFO : files: files passed Jan 30 13:52:49.874989 ignition[1356]: INFO : Ignition finished successfully Jan 30 13:52:49.872679 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:52:49.887766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:52:49.901895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
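The "GET <url>: attempt #1" entries in the files stage above show Ignition fetching the helm tarball and the kubernetes sysext image with numbered attempts. Ignition itself is written in Go and its real retry policy is not visible in this log; the Python sketch below only illustrates the attempt-counter-with-backoff pattern the messages suggest (the attempt limit and delays are assumptions):

    # Hypothetical fetch-with-retries mirroring the "GET <url>: attempt #N"
    # log lines; the backoff schedule here is an assumption, not Ignition's.
    import time
    import urllib.request

    def fetch_with_retries(url, attempts=5, base_delay=1.0):
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff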
Jan 30 13:52:49.907483 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:52:49.907627 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:52:49.919614 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:49.919614 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:49.923631 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:52:49.926105 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:49.930089 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:52:49.939753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:52:49.970383 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:52:49.970517 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:52:49.975229 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:52:49.977213 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:52:49.981069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:52:49.990734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:52:50.008874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:50.023024 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:52:50.079579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:50.085967 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:50.098420 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:52:50.109186 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:52:50.111545 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:52:50.120471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:52:50.128995 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:52:50.131230 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:52:50.134031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:52:50.137187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:52:50.140333 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:52:50.145021 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:52:50.149972 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:52:50.160857 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:52:50.163010 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:52:50.165666 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:52:50.167091 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:52:50.172581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:52:50.175689 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 13:52:50.179127 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:52:50.180618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:50.184363 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:52:50.187181 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:52:50.192789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:52:50.193038 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:52:50.196361 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:52:50.197177 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:52:50.209034 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:52:50.212319 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:52:50.213723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:50.241466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:52:50.248841 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:52:50.249232 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:50.251874 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:52:50.252036 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:52:50.267516 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:52:50.267870 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:52:50.295462 ignition[1409]: INFO : Ignition 2.19.0 Jan 30 13:52:50.295462 ignition[1409]: INFO : Stage: umount Jan 30 13:52:50.295462 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:52:50.295462 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:52:50.295462 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:52:50.313935 ignition[1409]: INFO : PUT result: OK Jan 30 13:52:50.313935 ignition[1409]: INFO : umount: umount passed Jan 30 13:52:50.313935 ignition[1409]: INFO : Ignition finished successfully Jan 30 13:52:50.317158 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:52:50.319828 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:52:50.319954 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:52:50.321280 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:52:50.321369 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:52:50.332336 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:52:50.332425 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:52:50.342039 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:52:50.342132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:52:50.345000 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:52:50.345077 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:52:50.348060 systemd[1]: Stopped target network.target - Network. Jan 30 13:52:50.350712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:52:50.351736 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:52:50.355179 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:52:50.359320 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:52:50.363639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:50.367454 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:52:50.370430 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:52:50.372824 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:52:50.372898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:52:50.377331 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:52:50.377411 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:52:50.381742 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:52:50.381836 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:52:50.385366 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:52:50.385455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:52:50.388897 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:52:50.388982 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:52:50.394454 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:52:50.397051 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:52:50.405608 systemd-networkd[1168]: eth0: DHCPv6 lease lost Jan 30 13:52:50.408784 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:52:50.408940 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:52:50.410604 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:52:50.410649 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:50.424788 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:52:50.426440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:52:50.426594 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:52:50.428778 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:50.441523 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:52:50.441673 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:52:50.462022 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:52:50.463255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:52:50.470244 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:52:50.470343 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:52:50.474797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:52:50.474861 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:50.476163 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:52:50.476299 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:52:50.480450 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:52:50.481571 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:52:50.489769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:52:50.490611 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:52:50.499920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:52:50.501470 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:52:50.501558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:50.504705 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:52:50.505039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:50.506260 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:52:50.506308 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:50.507775 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:52:50.507821 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:50.510949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:52:50.511344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:50.517580 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:52:50.517753 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:52:50.533907 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:52:50.534011 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:52:50.541782 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:52:50.550902 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:52:50.617509 systemd[1]: Switching root. Jan 30 13:52:50.655616 systemd-journald[178]: Journal stopped Jan 30 13:52:53.071259 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 30 13:52:53.071355 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:52:53.071388 kernel: SELinux: policy capability open_perms=1 Jan 30 13:52:53.071415 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:52:53.071441 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:52:53.071461 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:52:53.071487 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:52:53.071507 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:52:53.084027 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:52:53.084067 kernel: audit: type=1403 audit(1738245171.315:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:52:53.084308 systemd[1]: Successfully loaded SELinux policy in 75.689ms. Jan 30 13:52:53.084364 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.862ms. Jan 30 13:52:53.084396 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:52:53.084419 systemd[1]: Detected virtualization amazon. Jan 30 13:52:53.084445 systemd[1]: Detected architecture x86-64. Jan 30 13:52:53.084471 systemd[1]: Detected first boot. 
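The switch-root entries above include per-step timings: the SELinux policy load took 75.689 ms and the /dev, /dev/shm, /run, /sys/fs/cgroup relabel took 14.862 ms. A small hypothetical helper for pulling such "<event> in <N>ms" figures out of a saved console log like this one:

    # Extract "<event> in <N>ms" timings (the policy load, the relabel, the
    # later "Reloading finished in 347 ms") from a console log fed on stdin.
    import re
    import sys

    TIMING = re.compile(r"([A-Z][^.]*?) in (\d+(?:\.\d+)?) ?ms\b")

    for line in sys.stdin:
        for event, ms in TIMING.findall(line):
            print(f"{float(ms):10.3f} ms  {event.strip()}")

Fed this boot log on stdin, it would list the SELinux and relabel timings shown above.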
Jan 30 13:52:53.084493 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:52:53.084522 zram_generator::config[1468]: No configuration found. Jan 30 13:52:53.084559 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:52:53.084581 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:52:53.084606 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 13:52:53.084629 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:52:53.084650 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:52:53.084672 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:52:53.084695 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:52:53.084717 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:52:53.084738 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:52:53.084760 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:52:53.084781 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:52:53.084807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:52:53.084829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:52:53.084920 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:52:53.084944 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:52:53.084966 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:52:53.084988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:52:53.085010 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:52:53.085033 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:52:53.085055 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:52:53.085080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:52:53.085103 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:52:53.085124 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:52:53.085146 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:52:53.085171 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:52:53.085192 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:52:53.085213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:52:53.085235 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:52:53.085259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:52:53.085281 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:52:53.085302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:52:53.089546 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:52:53.089614 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 30 13:52:53.089637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:52:53.089659 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:52:53.089682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:53.089705 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:52:53.089739 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:52:53.089761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:52:53.089787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:52:53.089809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:52:53.089831 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:52:53.090701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:52:53.090740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:52:53.090762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:52:53.090794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:52:53.090816 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:52:53.090841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:52:53.090864 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:52:53.090886 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:52:53.090910 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:52:53.090931 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:52:53.090954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:52:53.090976 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:52:53.091002 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:52:53.091023 kernel: fuse: init (API version 7.39) Jan 30 13:52:53.091045 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:52:53.091067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:53.091089 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:52:53.091110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:52:53.091136 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:52:53.091157 kernel: loop: module loaded Jan 30 13:52:53.091178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:52:53.091203 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:52:53.091274 systemd-journald[1576]: Collecting audit messages is disabled. Jan 30 13:52:53.091313 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 30 13:52:53.091335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:52:53.091357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:52:53.091378 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:52:53.091400 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:52:53.091424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:52:53.091445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:52:53.091467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:52:53.091489 systemd-journald[1576]: Journal started Jan 30 13:52:53.102130 systemd-journald[1576]: Runtime Journal (/run/log/journal/ec2e6fa11b7dcf7efdbd859a81d4f77b) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:52:53.102228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:52:53.102268 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:52:53.102292 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:52:53.102315 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:52:53.102344 kernel: ACPI: bus type drm_connector registered Jan 30 13:52:53.102365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:52:53.107838 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:52:53.109822 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:52:53.110520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:52:53.112780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:52:53.115354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:52:53.117161 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:52:53.132848 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:52:53.140927 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:52:53.149680 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:52:53.152838 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:52:53.165791 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:52:53.178758 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:52:53.180659 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:52:53.193765 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:52:53.200831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:52:53.224796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:52:53.232630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:52:53.246308 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 30 13:52:53.250884 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:52:53.251877 systemd-journald[1576]: Time spent on flushing to /var/log/journal/ec2e6fa11b7dcf7efdbd859a81d4f77b is 74.684ms for 947 entries. Jan 30 13:52:53.251877 systemd-journald[1576]: System Journal (/var/log/journal/ec2e6fa11b7dcf7efdbd859a81d4f77b) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:52:53.348811 systemd-journald[1576]: Received client request to flush runtime journal. Jan 30 13:52:53.261367 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:52:53.265944 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:52:53.279038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:52:53.290258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:52:53.339782 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:52:53.353915 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:52:53.363419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:53.368977 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Jan 30 13:52:53.369005 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Jan 30 13:52:53.378345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:52:53.392931 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:52:53.463494 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:52:53.471812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:52:53.505812 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Jan 30 13:52:53.506298 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Jan 30 13:52:53.515042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:52:54.084809 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:52:54.092746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:52:54.145386 systemd-udevd[1646]: Using default interface naming scheme 'v255'. Jan 30 13:52:54.218726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:52:54.232759 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:52:54.290937 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:52:54.298563 (udev-worker)[1649]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:52:54.313862 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:52:54.486476 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 30 13:52:54.493562 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:52:54.497712 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 30 13:52:54.501207 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:52:54.501237 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 30 13:52:54.511578 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:52:54.573590 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 30 13:52:54.606462 systemd-networkd[1651]: lo: Link UP Jan 30 13:52:54.606472 systemd-networkd[1651]: lo: Gained carrier Jan 30 13:52:54.609258 systemd-networkd[1651]: Enumeration completed Jan 30 13:52:54.609430 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:52:54.610320 systemd-networkd[1651]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:54.610326 systemd-networkd[1651]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:52:54.614917 systemd-networkd[1651]: eth0: Link UP Jan 30 13:52:54.615795 systemd-networkd[1651]: eth0: Gained carrier Jan 30 13:52:54.615828 systemd-networkd[1651]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:52:54.616772 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:52:54.627716 systemd-networkd[1651]: eth0: DHCPv4 address 172.31.23.102/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:52:54.635195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:52:54.640711 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:52:54.657659 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1655) Jan 30 13:52:54.809108 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:52:54.818104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:52:54.843834 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:52:54.873564 lvm[1767]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:52:54.902081 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:52:55.000979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:52:55.013952 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:52:55.016046 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:52:55.023477 lvm[1772]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:52:55.052627 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:52:55.054751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:52:55.056227 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:52:55.056264 systemd[1]: Reached target local-fs.target - Local File Systems. 
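The DHCPv4 entry above hands eth0 the address 172.31.23.102/20 with gateway 172.31.16.1. The /20 prefix can be unpacked with the Python standard library to confirm the gateway sits inside the same 4096-address subnet:

    # Unpack the /20 lease from the systemd-networkd entry above.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.102/20")  # values from the lease
    net = iface.network
    print(net)                                          # 172.31.16.0/20
    print(net.netmask)                                  # 255.255.240.0
    print(net.num_addresses)                            # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)   # True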
Jan 30 13:52:55.057850 systemd[1]: Reached target machines.target - Containers. Jan 30 13:52:55.062098 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:52:55.070763 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:52:55.075026 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:52:55.076634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:52:55.078943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:52:55.085944 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:52:55.097906 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:52:55.105823 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:52:55.125927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:52:55.143561 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:52:55.167880 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:52:55.170004 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:52:55.198558 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:52:55.238567 kernel: loop1: detected capacity change from 0 to 61336 Jan 30 13:52:55.354556 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:52:55.478555 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:52:55.624558 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 13:52:55.647557 kernel: loop5: detected capacity change from 0 to 61336 Jan 30 13:52:55.659555 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:52:55.682561 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 13:52:55.702053 (sd-merge)[1794]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:52:55.703192 (sd-merge)[1794]: Merged extensions into '/usr'. Jan 30 13:52:55.707686 systemd[1]: Reloading requested from client PID 1781 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:52:55.707704 systemd[1]: Reloading... Jan 30 13:52:55.768576 zram_generator::config[1820]: No configuration found. Jan 30 13:52:55.955189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:56.056052 systemd[1]: Reloading finished in 347 ms. Jan 30 13:52:56.080905 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:52:56.097263 systemd[1]: Starting ensure-sysext.service... Jan 30 13:52:56.108037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:52:56.123788 systemd[1]: Reloading requested from client PID 1877 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:52:56.123808 systemd[1]: Reloading... Jan 30 13:52:56.145248 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
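The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extension images onto /usr. An image is only merged when its extension-release metadata is compatible with the host's os-release; the sketch below implements a simplified version of that ID rule (real systemd-sysext also compares SYSEXT_LEVEL/VERSION_ID and architecture):

    # Simplified sysext compatibility check: the extension's ID must equal
    # the host's ID, or be the wildcard "_any".
    def parse_release(text):
        fields = {}
        for line in text.splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, val = line.partition("=")
                fields[key] = val.strip().strip('"')
        return fields

    def sysext_compatible(host_os_release: str, extension_release: str) -> bool:
        host = parse_release(host_os_release)
        ext_id = parse_release(extension_release).get("ID")
        return ext_id == "_any" or (ext_id is not None and ext_id == host.get("ID"))

    print(sysext_compatible("ID=flatcar\n", "ID=flatcar\n"))  # True
    print(sysext_compatible("ID=flatcar\n", "ID=_any\n"))     # True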
Jan 30 13:52:56.146059 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:52:56.148157 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:52:56.148858 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 30 13:52:56.148943 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 30 13:52:56.158978 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:52:56.158997 systemd-tmpfiles[1878]: Skipping /boot Jan 30 13:52:56.170600 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:52:56.170614 systemd-tmpfiles[1878]: Skipping /boot Jan 30 13:52:56.293553 zram_generator::config[1909]: No configuration found. Jan 30 13:52:56.490622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:56.516625 ldconfig[1777]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:52:56.532657 systemd-networkd[1651]: eth0: Gained IPv6LL Jan 30 13:52:56.580054 systemd[1]: Reloading finished in 455 ms. Jan 30 13:52:56.598492 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:52:56.601315 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:52:56.608295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:52:56.623782 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:52:56.637152 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:52:56.642748 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:52:56.653481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:52:56.659797 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:52:56.687158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:56.689686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:52:56.700894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:52:56.736921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:52:56.746054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:52:56.747293 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:52:56.747482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:56.780289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:52:56.780631 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:52:56.792858 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:52:56.805011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 13:52:56.805488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:52:56.822927 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:52:56.823853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:52:56.845385 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:52:56.883149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:56.883581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:52:56.895177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:52:56.926786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:52:56.929983 augenrules[2006]: No rules Jan 30 13:52:56.934770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:52:56.948942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:52:56.950259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:52:56.950353 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:52:56.961752 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:52:56.962384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:52:56.971282 systemd[1]: Finished ensure-sysext.service. Jan 30 13:52:56.973134 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:52:56.979990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:52:56.980217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:52:56.984307 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:52:56.985690 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:52:56.988450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:52:56.990151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:52:56.992314 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:52:56.993978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:52:57.011657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:52:57.011745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:52:57.019107 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:52:57.025425 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:52:57.029069 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:52:57.036264 systemd-resolved[1973]: Positive Trust Anchors: Jan 30 13:52:57.036291 systemd-resolved[1973]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:52:57.036336 systemd-resolved[1973]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:52:57.055709 systemd-resolved[1973]: Defaulting to hostname 'linux'. Jan 30 13:52:57.060168 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:52:57.061491 systemd[1]: Reached target network.target - Network. Jan 30 13:52:57.065352 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:52:57.066498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:52:57.069590 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:52:57.071183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:52:57.073207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:52:57.075185 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:52:57.076651 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:52:57.078095 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:52:57.080173 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:52:57.080224 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:52:57.082659 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:52:57.084992 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:52:57.090142 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:52:57.094471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:52:57.103552 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:52:57.104722 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:52:57.105711 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:52:57.106945 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:52:57.107002 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:57.107026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:52:57.110209 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:52:57.118731 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:52:57.129943 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:52:57.146518 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:52:57.152335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
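The positive trust anchor systemd-resolved logs above is the root zone's DNSSEC trust anchor (the 2017 root KSK). A DS record carries four fields defined in RFC 4034, which split out as follows:

    # The DS record from the systemd-resolved entry above, split into its
    # RFC 4034 fields.
    ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    key_tag, algorithm, digest_type, digest = ds.split()
    print("key tag:    ", key_tag)      # 20326 identifies the 2017 root KSK
    print("algorithm:  ", algorithm)    # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256
    print("digest:     ", digest)       # SHA-256 digest over the root DNSKEY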
Jan 30 13:52:57.153604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:52:57.178583 jq[2037]: false Jan 30 13:52:57.180067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:57.201878 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:52:57.209111 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:52:57.213911 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:52:57.227960 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:52:57.244826 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:52:57.257855 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:52:57.258937 extend-filesystems[2038]: Found loop4 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found loop5 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found loop6 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found loop7 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p1 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p2 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p3 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found usr Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p4 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p6 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p7 Jan 30 13:52:57.265715 extend-filesystems[2038]: Found nvme0n1p9 Jan 30 13:52:57.265715 extend-filesystems[2038]: Checking size of /dev/nvme0n1p9 Jan 30 13:52:57.296760 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:52:57.322792 dbus-daemon[2035]: [system] SELinux support is enabled Jan 30 13:52:57.328083 dbus-daemon[2035]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1651 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:57.327779 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:52:57.331717 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:52:57.335833 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:52:57.355590 extend-filesystems[2038]: Resized partition /dev/nvme0n1p9 Jan 30 13:52:57.354681 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:52:57.363476 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:52:57.381883 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:52:57.382236 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:52:57.399233 extend-filesystems[2080]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:52:57.400213 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:52:57.401649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:52:57.407203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
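The extend-filesystems entries above enumerate the loop devices and nvme0n1 partitions present before the root partition is checked for resizing. On Linux a similar partition list can be read straight from sysfs; this sketch assumes the nvme0n1 device name seen on this host:

    # List partitions of nvme0n1 via sysfs (Linux only; device name assumed).
    from pathlib import Path

    disk = Path("/sys/block/nvme0n1")
    parts = sorted(p.name for p in disk.iterdir()
                   if p.name.startswith(disk.name) and (p / "partition").exists())
    print(parts)  # e.g. ['nvme0n1p1', 'nvme0n1p2', 'nvme0n1p3', ...]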
Jan 30 13:52:57.414893 jq[2073]: true Jan 30 13:52:57.421930 ntpd[2045]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:52:57.422818 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:52:57.421961 ntpd[2045]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:52:57.429287 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:52:57.421972 ntpd[2045]: ---------------------------------------------------- Jan 30 13:52:57.446880 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:52:57.421982 ntpd[2045]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:52:57.421992 ntpd[2045]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:52:57.422001 ntpd[2045]: corporation. Support and training for ntp-4 are Jan 30 13:52:57.422009 ntpd[2045]: available at https://www.nwtime.org/support Jan 30 13:52:57.422020 ntpd[2045]: ---------------------------------------------------- Jan 30 13:52:57.437073 ntpd[2045]: proto: precision = 0.061 usec (-24) Jan 30 13:52:57.437407 ntpd[2045]: basedate set to 2025-01-17 Jan 30 13:52:57.437423 ntpd[2045]: gps base set to 2025-01-19 (week 2350) Jan 30 13:52:57.452051 ntpd[2045]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:52:57.452105 ntpd[2045]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:52:57.452397 ntpd[2045]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:52:57.452439 ntpd[2045]: Listen normally on 3 eth0 172.31.23.102:123 Jan 30 13:52:57.452485 ntpd[2045]: Listen normally on 4 lo [::1]:123 Jan 30 13:52:57.473998 ntpd[2045]: Listen normally on 5 eth0 [fe80::480:e1ff:fe72:2017%2]:123 Jan 30 13:52:57.474085 ntpd[2045]: Listening on routing socket on fd #22 for interface updates
Jan 30 13:52:57.505300 update_engine[2068]: I20250130 13:52:57.497305 2068 main.cc:92] Flatcar Update Engine starting Jan 30 13:52:57.494898 ntpd[2045]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:57.494937 ntpd[2045]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:52:57.543555 update_engine[2068]: I20250130 13:52:57.542147 2068 update_check_scheduler.cc:74] Next update check in 2m17s Jan 30 13:52:57.544646 (ntainerd)[2095]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:52:57.548825 jq[2085]: true Jan 30 13:52:57.562795 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.581 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.582 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.586 INFO Fetch successful Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.587 INFO Fetch successful Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.587 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.589 INFO Fetch successful Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.590 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.591 INFO Fetch successful Jan 30 13:52:57.592581 coreos-metadata[2034]: Jan 30 13:52:57.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:52:57.593322 coreos-metadata[2034]: Jan 30 13:52:57.593 INFO Fetch failed with 404: resource not found Jan 30 13:52:57.593322
coreos-metadata[2034]: Jan 30 13:52:57.593 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:52:57.595757 coreos-metadata[2034]: Jan 30 13:52:57.593 INFO Fetch successful Jan 30 13:52:57.595757 coreos-metadata[2034]: Jan 30 13:52:57.593 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:52:57.596375 coreos-metadata[2034]: Jan 30 13:52:57.596 INFO Fetch successful Jan 30 13:52:57.606167 coreos-metadata[2034]: Jan 30 13:52:57.603 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 13:52:57.606992 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:52:57.613503 tar[2082]: linux-amd64/helm Jan 30 13:52:57.617549 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 13:52:57.622486 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:52:57.623064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:52:57.646862 coreos-metadata[2034]: Jan 30 13:52:57.624 INFO Fetch successful Jan 30 13:52:57.646862 coreos-metadata[2034]: Jan 30 13:52:57.624 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 13:52:57.646862 coreos-metadata[2034]: Jan 30 13:52:57.625 INFO Fetch successful Jan 30 13:52:57.646862 coreos-metadata[2034]: Jan 30 13:52:57.625 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 13:52:57.646862 coreos-metadata[2034]: Jan 30 13:52:57.628 INFO Fetch successful Jan 30 13:52:57.649830 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:52:57.658784 extend-filesystems[2080]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 13:52:57.658784 extend-filesystems[2080]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:52:57.658784 extend-filesystems[2080]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 13:52:57.654688 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:52:57.689825 extend-filesystems[2038]: Resized filesystem in /dev/nvme0n1p9 Jan 30 13:52:57.654723 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:52:57.657105 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:52:57.669747 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:52:57.709436 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:52:57.709905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:52:57.716472 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:52:57.741959 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
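
The coreos-metadata entries above follow the EC2 IMDSv2 flow: a PUT to http://169.254.169.254/latest/api/token yields a short-lived session token, every metadata GET then presents that token, and a 404 (as for the ipv6 key here) simply means the attribute is not set on this instance. A minimal sketch of the same exchange, runnable only from inside an EC2 instance:

    #!/usr/bin/env python3
    # Sketch of the IMDSv2 token + metadata fetch seen in the log.
    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a short-lived session token.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET a metadata key, presenting the token.
    req = urllib.request.Request(
        IMDS + "/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req, timeout=2).read().decode())
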
Jan 30 13:52:57.794072 systemd-logind[2064]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:52:57.794100 systemd-logind[2064]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:52:57.794124 systemd-logind[2064]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:52:57.795744 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:52:57.823759 systemd-logind[2064]: New seat seat0. Jan 30 13:52:57.828715 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:52:57.832004 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:52:57.897520 bash[2147]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:57.916247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:52:57.936924 systemd[1]: Starting sshkeys.service... Jan 30 13:52:57.987844 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:52:57.999068 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:52:58.085906 amazon-ssm-agent[2129]: Initializing new seelog logger Jan 30 13:52:58.100549 amazon-ssm-agent[2129]: New Seelog Logger Creation Complete Jan 30 13:52:58.100549 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.100549 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.100549 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 processing appconfig overrides Jan 30 13:52:58.101774 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.101774 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.104702 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 processing appconfig overrides Jan 30 13:52:58.104702 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.104702 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.104702 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 processing appconfig overrides Jan 30 13:52:58.116839 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO Proxy environment variables: Jan 30 13:52:58.123116 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.123241 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:52:58.123438 amazon-ssm-agent[2129]: 2025/01/30 13:52:58 processing appconfig overrides Jan 30 13:52:58.198558 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2162) Jan 30 13:52:58.217347 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO no_proxy: Jan 30 13:52:58.220009 locksmithd[2115]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:52:58.240474 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:52:58.240908 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
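
amazon-ssm-agent reports that it found /etc/amazon/ssm/amazon-ssm-agent.json and is "processing appconfig overrides", meaning values from that file are layered on top of built-in defaults. A hedged sketch of that pattern follows; the default dict and its keys are illustrative only, not the agent's real schema:

    #!/usr/bin/env python3
    # Sketch: layer JSON config overrides onto built-in defaults.
    # Only the override file path comes from the log; the schema is
    # hypothetical.
    import copy, json, os

    DEFAULTS = {"Agent": {"Region": "", "LogLevel": "INFO"}}  # hypothetical

    def merge(base, override):
        # Recursively lay override values over the defaults.
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                merge(base[key], value)
            else:
                base[key] = value
        return base

    path = "/etc/amazon/ssm/amazon-ssm-agent.json"
    config = copy.deepcopy(DEFAULTS)
    if os.path.exists(path):
        with open(path) as fh:
            merge(config, json.load(fh))
    print(config)
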
Jan 30 13:52:58.248784 dbus-daemon[2035]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2112 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:52:58.262761 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:52:58.327409 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO https_proxy: Jan 30 13:52:58.328597 polkitd[2197]: Started polkitd version 121 Jan 30 13:52:58.385313 sshd_keygen[2093]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:52:58.404333 polkitd[2197]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:52:58.426848 polkitd[2197]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:52:58.432691 polkitd[2197]: Finished loading, compiling and executing 2 rules Jan 30 13:52:58.434804 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO http_proxy: Jan 30 13:52:58.437135 dbus-daemon[2035]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:52:58.438167 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:52:58.461621 coreos-metadata[2164]: Jan 30 13:52:58.459 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:52:58.464056 coreos-metadata[2164]: Jan 30 13:52:58.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 13:52:58.464056 coreos-metadata[2164]: Jan 30 13:52:58.463 INFO Fetch successful Jan 30 13:52:58.464056 coreos-metadata[2164]: Jan 30 13:52:58.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 13:52:58.465459 coreos-metadata[2164]: Jan 30 13:52:58.464 INFO Fetch successful Jan 30 13:52:58.469033 polkitd[2197]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:52:58.469922 unknown[2164]: wrote ssh authorized keys file for user: core Jan 30 13:52:58.476518 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:52:58.494018 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:52:58.536215 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO Checking if agent identity type OnPrem can be assumed Jan 30 13:52:58.552788 update-ssh-keys[2280]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:52:58.546860 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:52:58.572865 systemd[1]: Finished sshkeys.service. Jan 30 13:52:58.594383 systemd-hostnamed[2112]: Hostname set to (transient) Jan 30 13:52:58.600817 systemd-resolved[1973]: System hostname changed to 'ip-172-31-23-102'. Jan 30 13:52:58.604258 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:52:58.604666 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:52:58.641154 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:52:58.648467 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO Checking if agent identity type EC2 can be assumed Jan 30 13:52:58.725687 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:52:58.744491 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:52:58.747062 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO Agent will take identity from EC2 Jan 30 13:52:58.754121 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:52:58.758152 systemd[1]: Reached target getty.target - Login Prompts. 
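
sshd-keygen.service above creates the RSA, ECDSA and ED25519 host keys on first boot, while coreos-metadata-sshkeys writes the instance's public key into /home/core/.ssh/authorized_keys. The host-key half can be sketched with ssh-keygen; the /etc/ssh paths are the conventional locations, an assumption to adjust per distribution:

    #!/usr/bin/env python3
    # Sketch: create missing sshd host keys, as sshd-keygen does.
    import os, subprocess

    HOST_KEYS = {  # conventional paths; adjust for your distribution
        "rsa": "/etc/ssh/ssh_host_rsa_key",
        "ecdsa": "/etc/ssh/ssh_host_ecdsa_key",
        "ed25519": "/etc/ssh/ssh_host_ed25519_key",
    }

    for ktype, path in HOST_KEYS.items():
        if not os.path.exists(path):
            # -N '' means no passphrase, which host keys require.
            subprocess.run(
                ["ssh-keygen", "-q", "-t", ktype, "-N", "", "-f", path],
                check=True,
            )
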
Jan 30 13:52:58.846652 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:58.883924 containerd[2095]: time="2025-01-30T13:52:58.883691322Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:52:58.946548 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:58.975895 containerd[2095]: time="2025-01-30T13:52:58.975805051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.978469 containerd[2095]: time="2025-01-30T13:52:58.978391615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:58.978698 containerd[2095]: time="2025-01-30T13:52:58.978676759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:52:58.978800 containerd[2095]: time="2025-01-30T13:52:58.978786147Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:52:58.979147 containerd[2095]: time="2025-01-30T13:52:58.979117057Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:52:58.979289 containerd[2095]: time="2025-01-30T13:52:58.979271368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.979510 containerd[2095]: time="2025-01-30T13:52:58.979488345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:58.979621 containerd[2095]: time="2025-01-30T13:52:58.979606311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.980607 containerd[2095]: time="2025-01-30T13:52:58.980519144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:58.980744 containerd[2095]: time="2025-01-30T13:52:58.980726322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.980879 containerd[2095]: time="2025-01-30T13:52:58.980809403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:58.980879 containerd[2095]: time="2025-01-30T13:52:58.980829603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.981192 containerd[2095]: time="2025-01-30T13:52:58.981075231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:52:58.981574 containerd[2095]: time="2025-01-30T13:52:58.981506050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:52:58.981937 containerd[2095]: time="2025-01-30T13:52:58.981913575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:52:58.982084 containerd[2095]: time="2025-01-30T13:52:58.981985268Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:52:58.982289 containerd[2095]: time="2025-01-30T13:52:58.982237661Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:52:58.982430 containerd[2095]: time="2025-01-30T13:52:58.982373740Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.988545550Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.988636129Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.988663172Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.988726750Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.988749023Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:52:58.989224 containerd[2095]: time="2025-01-30T13:52:58.989059624Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990048265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990208010Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990231245Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990269746Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990292218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990312728Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990335987Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990356569Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990391100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990410933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990428992Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990447999Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990476411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.990767 containerd[2095]: time="2025-01-30T13:52:58.990497231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990515685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990548788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990568235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990588185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990605961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990625211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990645737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990669212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990687667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990716309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.991313 containerd[2095]: time="2025-01-30T13:52:58.990736441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.991755469Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.991803205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.991824689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.991843701Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.991901168Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992324531Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992350583Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992371527Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992387115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992406839Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992426505Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:52:58.992665 containerd[2095]: time="2025-01-30T13:52:58.992441325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:52:58.993844 containerd[2095]: time="2025-01-30T13:52:58.993408079Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:52:58.993844 containerd[2095]: time="2025-01-30T13:52:58.993499849Z" level=info msg="Connect containerd service" Jan 30 13:52:58.994669 containerd[2095]: time="2025-01-30T13:52:58.994154738Z" level=info msg="using legacy CRI server" Jan 30 13:52:58.994669 containerd[2095]: time="2025-01-30T13:52:58.994175223Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:52:58.994669 containerd[2095]: time="2025-01-30T13:52:58.994321284Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:52:58.999035 containerd[2095]: time="2025-01-30T13:52:58.998335812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
13:52:59.000065 containerd[2095]: time="2025-01-30T13:52:58.999215652Z" level=info msg="Start subscribing containerd event" Jan 30 13:52:59.000685 containerd[2095]: time="2025-01-30T13:52:59.000658307Z" level=info msg="Start recovering state" Jan 30 13:52:59.001246 containerd[2095]: time="2025-01-30T13:52:59.001226896Z" level=info msg="Start event monitor" Jan 30 13:52:59.001331 containerd[2095]: time="2025-01-30T13:52:59.001318680Z" level=info msg="Start snapshots syncer" Jan 30 13:52:59.001397 containerd[2095]: time="2025-01-30T13:52:59.001385326Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:52:59.001764 containerd[2095]: time="2025-01-30T13:52:59.001520266Z" level=info msg="Start streaming server" Jan 30 13:52:59.001764 containerd[2095]: time="2025-01-30T13:52:59.001121589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:52:59.001764 containerd[2095]: time="2025-01-30T13:52:59.001725346Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:52:59.003351 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:52:59.005724 containerd[2095]: time="2025-01-30T13:52:59.003996111Z" level=info msg="containerd successfully booted in 0.122110s" Jan 30 13:52:59.006342 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [Registrar] Starting registrar module Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 13:52:59.006414 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [EC2Identity] EC2 registration was successful. Jan 30 13:52:59.006756 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [CredentialRefresher] credentialRefresher has started Jan 30 13:52:59.006756 amazon-ssm-agent[2129]: 2025-01-30 13:52:58 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 13:52:59.006756 amazon-ssm-agent[2129]: 2025-01-30 13:52:59 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 13:52:59.046001 amazon-ssm-agent[2129]: 2025-01-30 13:52:59 INFO [CredentialRefresher] Next credential rotation will be in 32.1166597773 minutes Jan 30 13:52:59.268653 tar[2082]: linux-amd64/LICENSE Jan 30 13:52:59.271902 tar[2082]: linux-amd64/README.md Jan 30 13:52:59.315886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:52:59.926779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:59.929457 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:52:59.930683 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:59.931432 systemd[1]: Startup finished in 9.369s (kernel) + 8.686s (userspace) = 18.055s. 
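
The containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is expected on a node where no CNI plugin has been installed yet; the "cni network conf syncer" it starts keeps watching that directory and recovers once a config file appears. A small sketch of the check the CRI plugin is effectively performing (the directory comes from the NetworkPluginConfDir value in the config dump above; the matched extensions are an assumption):

    #!/usr/bin/env python3
    # Sketch: look for CNI network configs the way the CRI plugin does.
    import glob

    conf_dir = "/etc/cni/net.d"  # NetworkPluginConfDir from the log
    configs = sorted(
        glob.glob(conf_dir + "/*.conf")
        + glob.glob(conf_dir + "/*.conflist")
        + glob.glob(conf_dir + "/*.json")
    )
    if configs:
        print("CNI config(s) found:", configs)
    else:
        print("no network config found in", conf_dir)
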
Jan 30 13:53:00.047796 amazon-ssm-agent[2129]: 2025-01-30 13:53:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 13:53:00.149445 amazon-ssm-agent[2129]: 2025-01-30 13:53:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2327) started Jan 30 13:53:00.249644 amazon-ssm-agent[2129]: 2025-01-30 13:53:00 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 13:53:01.026511 kubelet[2323]: E0130 13:53:01.026454 2323 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:53:01.030497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:53:01.030840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:53:04.789931 systemd-resolved[1973]: Clock change detected. Flushing caches. Jan 30 13:53:05.695755 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:53:05.713417 systemd[1]: Started sshd@0-172.31.23.102:22-139.178.68.195:45696.service - OpenSSH per-connection server daemon (139.178.68.195:45696). Jan 30 13:53:05.898484 sshd[2348]: Accepted publickey for core from 139.178.68.195 port 45696 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:05.903080 sshd[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:05.918192 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:53:05.929433 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:53:05.933213 systemd-logind[2064]: New session 1 of user core. Jan 30 13:53:05.964732 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:53:05.981553 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:53:06.003166 (systemd)[2354]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:53:06.177806 systemd[2354]: Queued start job for default target default.target. Jan 30 13:53:06.178447 systemd[2354]: Created slice app.slice - User Application Slice. Jan 30 13:53:06.178484 systemd[2354]: Reached target paths.target - Paths. Jan 30 13:53:06.178503 systemd[2354]: Reached target timers.target - Timers. Jan 30 13:53:06.185150 systemd[2354]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:53:06.195905 systemd[2354]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:53:06.195998 systemd[2354]: Reached target sockets.target - Sockets. Jan 30 13:53:06.196019 systemd[2354]: Reached target basic.target - Basic System. Jan 30 13:53:06.196075 systemd[2354]: Reached target default.target - Main User Target. Jan 30 13:53:06.196113 systemd[2354]: Startup finished in 168ms. Jan 30 13:53:06.196953 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:53:06.208428 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:53:06.364695 systemd[1]: Started sshd@1-172.31.23.102:22-139.178.68.195:45698.service - OpenSSH per-connection server daemon (139.178.68.195:45698). 
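
The kubelet exit above (repeated at every scheduled restart further down) is the stock first-boot failure: /var/lib/kubelet/config.yaml does not exist because nothing has joined this node to a cluster yet, and kubeadm only writes that file during 'kubeadm init' or 'kubeadm join'. A trivial preflight sketch for the same condition:

    #!/usr/bin/env python3
    # Sketch: preflight check for the kubelet config the log says is missing.
    import os, sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    if not os.path.isfile(CONFIG):
        sys.exit(
            f"{CONFIG} not found: run 'kubeadm init' or 'kubeadm join' "
            "first so kubeadm can write the kubelet configuration"
        )
    print(f"{CONFIG} present")
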
Jan 30 13:53:06.554222 sshd[2366]: Accepted publickey for core from 139.178.68.195 port 45698 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:06.556240 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:06.564206 systemd-logind[2064]: New session 2 of user core. Jan 30 13:53:06.570409 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:53:06.695520 sshd[2366]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:06.700907 systemd[1]: sshd@1-172.31.23.102:22-139.178.68.195:45698.service: Deactivated successfully. Jan 30 13:53:06.706394 systemd-logind[2064]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:53:06.707805 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:53:06.710082 systemd-logind[2064]: Removed session 2. Jan 30 13:53:06.727787 systemd[1]: Started sshd@2-172.31.23.102:22-139.178.68.195:45700.service - OpenSSH per-connection server daemon (139.178.68.195:45700). Jan 30 13:53:06.905421 sshd[2374]: Accepted publickey for core from 139.178.68.195 port 45700 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:06.907347 sshd[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:06.915768 systemd-logind[2064]: New session 3 of user core. Jan 30 13:53:06.927497 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:53:07.060839 sshd[2374]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:07.067083 systemd[1]: sshd@2-172.31.23.102:22-139.178.68.195:45700.service: Deactivated successfully. Jan 30 13:53:07.073089 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:53:07.074830 systemd-logind[2064]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:53:07.076637 systemd-logind[2064]: Removed session 3. Jan 30 13:53:07.092289 systemd[1]: Started sshd@3-172.31.23.102:22-139.178.68.195:45716.service - OpenSSH per-connection server daemon (139.178.68.195:45716). Jan 30 13:53:07.255636 sshd[2382]: Accepted publickey for core from 139.178.68.195 port 45716 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:07.257311 sshd[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:07.263161 systemd-logind[2064]: New session 4 of user core. Jan 30 13:53:07.272251 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:53:07.416684 sshd[2382]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:07.420564 systemd[1]: sshd@3-172.31.23.102:22-139.178.68.195:45716.service: Deactivated successfully. Jan 30 13:53:07.426521 systemd-logind[2064]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:53:07.427728 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:53:07.430299 systemd-logind[2064]: Removed session 4. Jan 30 13:53:07.445282 systemd[1]: Started sshd@4-172.31.23.102:22-139.178.68.195:45728.service - OpenSSH per-connection server daemon (139.178.68.195:45728). Jan 30 13:53:07.613464 sshd[2390]: Accepted publickey for core from 139.178.68.195 port 45728 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:07.615820 sshd[2390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:07.625279 systemd-logind[2064]: New session 5 of user core. Jan 30 13:53:07.632298 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 13:53:07.760656 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:53:07.761149 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:07.780167 sudo[2395]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:07.804577 sshd[2390]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:07.812077 systemd[1]: sshd@4-172.31.23.102:22-139.178.68.195:45728.service: Deactivated successfully. Jan 30 13:53:07.819721 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:53:07.821435 systemd-logind[2064]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:53:07.822699 systemd-logind[2064]: Removed session 5. Jan 30 13:53:07.838550 systemd[1]: Started sshd@5-172.31.23.102:22-139.178.68.195:45738.service - OpenSSH per-connection server daemon (139.178.68.195:45738). Jan 30 13:53:08.033652 sshd[2400]: Accepted publickey for core from 139.178.68.195 port 45738 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:08.035760 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:08.045982 systemd-logind[2064]: New session 6 of user core. Jan 30 13:53:08.049342 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:53:08.164440 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:53:08.165106 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:08.171626 sudo[2405]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:08.178349 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:53:08.178831 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:08.209417 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:08.229619 auditctl[2408]: No rules Jan 30 13:53:08.231032 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:53:08.231438 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:08.248828 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:08.332022 augenrules[2427]: No rules Jan 30 13:53:08.335714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:08.341216 sudo[2404]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:08.364862 sshd[2400]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:08.370205 systemd[1]: sshd@5-172.31.23.102:22-139.178.68.195:45738.service: Deactivated successfully. Jan 30 13:53:08.376126 systemd-logind[2064]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:53:08.377078 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:53:08.378426 systemd-logind[2064]: Removed session 6. Jan 30 13:53:08.400350 systemd[1]: Started sshd@6-172.31.23.102:22-139.178.68.195:45752.service - OpenSSH per-connection server daemon (139.178.68.195:45752). Jan 30 13:53:08.564082 sshd[2436]: Accepted publickey for core from 139.178.68.195 port 45752 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:08.566257 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:08.584420 systemd-logind[2064]: New session 7 of user core. 
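
The sudo session above deletes two audit rule files and restarts audit-rules.service, after which auditctl reports "No rules" because the kernel's loaded rule set is empty. Listing and flushing the in-kernel rules directly can be sketched like this (auditctl -l lists the rules, -D deletes them; both need root):

    #!/usr/bin/env python3
    # Sketch: inspect and flush kernel audit rules, as the session above
    # does indirectly through audit-rules.service. Requires root.
    import subprocess

    # List currently loaded rules; prints "No rules" when the set is empty.
    subprocess.run(["auditctl", "-l"], check=True)

    # Flush all loaded rules, which an empty rules directory amounts to.
    subprocess.run(["auditctl", "-D"], check=True)
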
Jan 30 13:53:08.600351 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:53:08.705713 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:53:08.706134 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:09.195184 (dockerd)[2457]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:53:09.196028 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:53:09.830373 dockerd[2457]: time="2025-01-30T13:53:09.830301761Z" level=info msg="Starting up" Jan 30 13:53:10.004721 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1712269908-merged.mount: Deactivated successfully. Jan 30 13:53:10.298518 dockerd[2457]: time="2025-01-30T13:53:10.298391262Z" level=info msg="Loading containers: start." Jan 30 13:53:10.453954 kernel: Initializing XFRM netlink socket Jan 30 13:53:10.484440 (udev-worker)[2478]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:10.561621 systemd-networkd[1651]: docker0: Link UP Jan 30 13:53:10.576808 dockerd[2457]: time="2025-01-30T13:53:10.576757696Z" level=info msg="Loading containers: done." Jan 30 13:53:10.597682 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3213727648-merged.mount: Deactivated successfully. Jan 30 13:53:10.604285 dockerd[2457]: time="2025-01-30T13:53:10.604229031Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:53:10.604462 dockerd[2457]: time="2025-01-30T13:53:10.604358812Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:53:10.604517 dockerd[2457]: time="2025-01-30T13:53:10.604491857Z" level=info msg="Daemon has completed initialization" Jan 30 13:53:10.643038 dockerd[2457]: time="2025-01-30T13:53:10.642894684Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:53:10.643505 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:53:11.650746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:53:11.664752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:11.871089 containerd[2095]: time="2025-01-30T13:53:11.871059791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:53:12.022122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:12.031352 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:53:12.080254 kubelet[2614]: E0130 13:53:12.080219 2614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:53:12.084156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:53:12.084423 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
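
dockerd above completes initialization and announces "API listen on /run/docker.sock"; anything that can speak HTTP over a Unix socket can query that API. A minimal sketch, assuming root or docker-group access to the socket:

    #!/usr/bin/env python3
    # Sketch: query the Docker Engine API over the Unix socket from the log.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host is unused over AF_UNIX
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.load(conn.getresponse()))
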
Jan 30 13:53:12.523707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470610942.mount: Deactivated successfully. Jan 30 13:53:14.916265 containerd[2095]: time="2025-01-30T13:53:14.916151897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:14.917787 containerd[2095]: time="2025-01-30T13:53:14.917736257Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:53:14.919187 containerd[2095]: time="2025-01-30T13:53:14.918706913Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:14.922970 containerd[2095]: time="2025-01-30T13:53:14.922927888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:14.924948 containerd[2095]: time="2025-01-30T13:53:14.924784630Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.053234203s" Jan 30 13:53:14.925116 containerd[2095]: time="2025-01-30T13:53:14.925093986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:53:14.962400 containerd[2095]: time="2025-01-30T13:53:14.962366847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:53:17.165451 containerd[2095]: time="2025-01-30T13:53:17.165399078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:17.166775 containerd[2095]: time="2025-01-30T13:53:17.166722934Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:53:17.167975 containerd[2095]: time="2025-01-30T13:53:17.167915280Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:17.171246 containerd[2095]: time="2025-01-30T13:53:17.171182126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:17.172401 containerd[2095]: time="2025-01-30T13:53:17.172363043Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.209794404s" Jan 30 13:53:17.172793 containerd[2095]: time="2025-01-30T13:53:17.172539024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" 
returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:53:17.209114 containerd[2095]: time="2025-01-30T13:53:17.209075080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:53:18.779167 containerd[2095]: time="2025-01-30T13:53:18.779104663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.781257 containerd[2095]: time="2025-01-30T13:53:18.781203470Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:53:18.783611 containerd[2095]: time="2025-01-30T13:53:18.782675971Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.787114 containerd[2095]: time="2025-01-30T13:53:18.787048666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.788684 containerd[2095]: time="2025-01-30T13:53:18.788640950Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.579523178s" Jan 30 13:53:18.788844 containerd[2095]: time="2025-01-30T13:53:18.788823662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:53:18.831179 containerd[2095]: time="2025-01-30T13:53:18.831130123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:53:20.084071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209867817.mount: Deactivated successfully. 
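
Each "Pulled image" record above pairs a byte count with a wall-clock duration, so the implied registry throughput is easy to sanity-check: 32,677,012 bytes in 3.053s for kube-apiserver is roughly 10.2 MiB/s. The same arithmetic as a sketch, with the figures copied from the log:

    #!/usr/bin/env python3
    # Throughput implied by the image pull records in the log above.
    pulls = {
        # image: (bytes read, seconds reported for the pull)
        "kube-apiserver:v1.30.9": (32_677_012, 3.053234203),
        "kube-controller-manager:v1.30.9": (29_605_745, 2.209794404),
        "kube-scheduler:v1.30.9": (17_783_064, 1.579523178),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")
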
Jan 30 13:53:20.771564 containerd[2095]: time="2025-01-30T13:53:20.771509286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:20.773904 containerd[2095]: time="2025-01-30T13:53:20.773784170Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:53:20.775394 containerd[2095]: time="2025-01-30T13:53:20.775331015Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:20.779954 containerd[2095]: time="2025-01-30T13:53:20.779126061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:20.779954 containerd[2095]: time="2025-01-30T13:53:20.779755876Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.948582125s" Jan 30 13:53:20.779954 containerd[2095]: time="2025-01-30T13:53:20.779796306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:53:20.811712 containerd[2095]: time="2025-01-30T13:53:20.811663900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:53:21.410712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548135629.mount: Deactivated successfully. Jan 30 13:53:22.335735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:53:22.346271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:22.629904 containerd[2095]: time="2025-01-30T13:53:22.625074345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:22.629904 containerd[2095]: time="2025-01-30T13:53:22.627339167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:53:22.629904 containerd[2095]: time="2025-01-30T13:53:22.629004277Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:22.636116 containerd[2095]: time="2025-01-30T13:53:22.634505065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:22.646297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
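
The "Scheduled restart job, restart counter is at 2" entries show systemd's Restart= logic re-launching the failing kubelet and counting the attempts. That counter is exposed as the NRestarts unit property, so it can be read back directly; a small sketch:

    #!/usr/bin/env python3
    # Sketch: read the restart counter systemd keeps for kubelet.service.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "-p", "NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "NRestarts=2", matching the log's restart counter
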
Jan 30 13:53:22.651395 containerd[2095]: time="2025-01-30T13:53:22.651343426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.839632334s" Jan 30 13:53:22.651534 containerd[2095]: time="2025-01-30T13:53:22.651408800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:53:22.664239 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:53:22.697115 containerd[2095]: time="2025-01-30T13:53:22.696791128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:53:22.745056 kubelet[2767]: E0130 13:53:22.745016 2767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:53:22.748443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:53:22.749843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:53:23.181380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616892236.mount: Deactivated successfully. Jan 30 13:53:23.193528 containerd[2095]: time="2025-01-30T13:53:23.193480130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:23.194585 containerd[2095]: time="2025-01-30T13:53:23.194529340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:53:23.195914 containerd[2095]: time="2025-01-30T13:53:23.195639451Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:23.198913 containerd[2095]: time="2025-01-30T13:53:23.198431807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:23.199911 containerd[2095]: time="2025-01-30T13:53:23.199327582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 502.477556ms" Jan 30 13:53:23.199911 containerd[2095]: time="2025-01-30T13:53:23.199369751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:53:23.226249 containerd[2095]: time="2025-01-30T13:53:23.226207940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:53:23.799219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044025774.mount: Deactivated successfully. 
Jan 30 13:53:26.564139 containerd[2095]: time="2025-01-30T13:53:26.564085521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:26.565574 containerd[2095]: time="2025-01-30T13:53:26.565523219Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:53:26.566865 containerd[2095]: time="2025-01-30T13:53:26.566468503Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:26.570781 containerd[2095]: time="2025-01-30T13:53:26.570735758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:26.572515 containerd[2095]: time="2025-01-30T13:53:26.572474268Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.34622785s" Jan 30 13:53:26.572686 containerd[2095]: time="2025-01-30T13:53:26.572664950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:53:29.013288 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:53:30.615467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:30.628391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:30.678443 systemd[1]: Reloading requested from client PID 2906 ('systemctl') (unit session-7.scope)... Jan 30 13:53:30.678462 systemd[1]: Reloading... Jan 30 13:53:30.852984 zram_generator::config[2946]: No configuration found. Jan 30 13:53:31.050074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:53:31.149596 systemd[1]: Reloading finished in 470 ms. Jan 30 13:53:31.209283 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:53:31.209409 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:53:31.209796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:31.221284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:31.452797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:31.460678 (kubelet)[3019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:53:31.563029 kubelet[3019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:31.563029 kubelet[3019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
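[annotation] The docker.socket message during the systemd reload is a warning about a legacy /var/run path in the shipped unit, not an error; a drop-in override is the usual fix (the drop-in file name below is illustrative):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= clears the inherited value first, since socket listen directives accumulate across assignments; follow with systemctl daemon-reload.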
Jan 30 13:53:31.563029 kubelet[3019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:31.566193 kubelet[3019]: I0130 13:53:31.565915 3019 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:53:31.776432 kubelet[3019]: I0130 13:53:31.776308 3019 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:53:31.776432 kubelet[3019]: I0130 13:53:31.776342 3019 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:53:31.776747 kubelet[3019]: I0130 13:53:31.776724 3019 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:53:31.814722 kubelet[3019]: I0130 13:53:31.814682 3019 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:53:31.820862 kubelet[3019]: E0130 13:53:31.820778 3019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.849107 kubelet[3019]: I0130 13:53:31.848925 3019 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:53:31.861371 kubelet[3019]: I0130 13:53:31.861259 3019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:53:31.861600 kubelet[3019]: I0130 13:53:31.861365 3019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-102","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:53:31.862034 kubelet[3019]: I0130 13:53:31.861614 3019 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 13:53:31.862034 kubelet[3019]: I0130 13:53:31.861630 3019 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:53:31.862146 kubelet[3019]: I0130 13:53:31.862048 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:31.866642 kubelet[3019]: W0130 13:53:31.866544 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-102&limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.866642 kubelet[3019]: E0130 13:53:31.866646 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-102&limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.867010 kubelet[3019]: I0130 13:53:31.866724 3019 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:53:31.867010 kubelet[3019]: I0130 13:53:31.866745 3019 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:53:31.867010 kubelet[3019]: I0130 13:53:31.866809 3019 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:53:31.867010 kubelet[3019]: I0130 13:53:31.866832 3019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:53:31.872174 kubelet[3019]: W0130 13:53:31.872087 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.872174 kubelet[3019]: E0130 13:53:31.872174 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.873439 kubelet[3019]: I0130 13:53:31.872844 3019 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:53:31.875888 kubelet[3019]: I0130 13:53:31.875830 3019 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:53:31.876051 kubelet[3019]: W0130 13:53:31.875952 3019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
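[annotation] The reflector "connection refused" errors against 172.31.23.102:6443 are likewise expected at this stage: this kubelet is itself about to launch the control plane as static pods from /etc/kubernetes/manifests (the path it logs above), so nothing is listening on 6443 yet. A skeleton of what such a manifest looks like — the image tag and flags are assumptions, not taken from this log; only the names, the static-pod path, and the node address 172.31.23.102 appear in the entries around here:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.1                     # tag assumed
        command: ["kube-apiserver", "--advertise-address=172.31.23.102"]  # flags abridged

Once the sandboxes below are created and kube-apiserver starts, the dial errors stop and the node registers successfully (13:53:36).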
Jan 30 13:53:31.877892 kubelet[3019]: I0130 13:53:31.876787 3019 server.go:1264] "Started kubelet" Jan 30 13:53:31.881228 kubelet[3019]: I0130 13:53:31.881186 3019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:53:31.884693 kubelet[3019]: I0130 13:53:31.884670 3019 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:53:31.886734 kubelet[3019]: I0130 13:53:31.886674 3019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:53:31.887275 kubelet[3019]: I0130 13:53:31.887231 3019 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:53:31.888224 kubelet[3019]: E0130 13:53:31.887777 3019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.102:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-102.181f7ccf827013d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-102,UID:ip-172-31-23-102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-102,},FirstTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,LastTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-102,}" Jan 30 13:53:31.893484 kubelet[3019]: I0130 13:53:31.891292 3019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:53:31.893484 kubelet[3019]: I0130 13:53:31.891464 3019 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:53:31.894935 kubelet[3019]: I0130 13:53:31.894910 3019 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:53:31.898755 kubelet[3019]: I0130 13:53:31.897576 3019 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:53:31.899434 kubelet[3019]: W0130 13:53:31.899382 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.899671 kubelet[3019]: E0130 13:53:31.899654 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.902344 kubelet[3019]: E0130 13:53:31.902318 3019 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-23-102\" not found" Jan 30 13:53:31.903177 kubelet[3019]: E0130 13:53:31.903141 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-102?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="200ms" Jan 30 13:53:31.904860 kubelet[3019]: E0130 13:53:31.903379 3019 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:53:31.905549 kubelet[3019]: I0130 13:53:31.905518 3019 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:53:31.905747 kubelet[3019]: I0130 13:53:31.905630 3019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:53:31.907464 kubelet[3019]: I0130 13:53:31.907394 3019 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:53:31.928292 kubelet[3019]: I0130 13:53:31.928236 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:53:31.931486 kubelet[3019]: I0130 13:53:31.931440 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:53:31.931486 kubelet[3019]: I0130 13:53:31.931480 3019 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:53:31.931746 kubelet[3019]: I0130 13:53:31.931589 3019 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:53:31.931746 kubelet[3019]: E0130 13:53:31.931646 3019 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:53:31.946683 kubelet[3019]: W0130 13:53:31.946611 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.946683 kubelet[3019]: E0130 13:53:31.946686 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:31.962498 kubelet[3019]: I0130 13:53:31.962447 3019 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:53:31.962498 kubelet[3019]: I0130 13:53:31.962467 3019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:53:31.962498 kubelet[3019]: I0130 13:53:31.962487 3019 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:31.964840 kubelet[3019]: I0130 13:53:31.964814 3019 policy_none.go:49] "None policy: Start" Jan 30 13:53:31.965487 kubelet[3019]: I0130 13:53:31.965467 3019 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:53:31.965487 kubelet[3019]: I0130 13:53:31.965492 3019 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:53:31.973219 kubelet[3019]: I0130 13:53:31.973183 3019 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:53:31.975483 kubelet[3019]: I0130 13:53:31.973415 3019 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:53:31.975483 kubelet[3019]: I0130 13:53:31.973555 3019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:53:31.977837 kubelet[3019]: E0130 13:53:31.977800 3019 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-102\" not found" Jan 30 13:53:32.009713 kubelet[3019]: I0130 13:53:32.009682 3019 kubelet_node_status.go:73] 
"Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:32.011669 kubelet[3019]: E0130 13:53:32.011567 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.102:6443/api/v1/nodes\": dial tcp 172.31.23.102:6443: connect: connection refused" node="ip-172-31-23-102" Jan 30 13:53:32.032597 kubelet[3019]: I0130 13:53:32.032207 3019 topology_manager.go:215] "Topology Admit Handler" podUID="756ac0809267efcb7e6c93bd008648a2" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-102" Jan 30 13:53:32.040793 kubelet[3019]: I0130 13:53:32.040092 3019 topology_manager.go:215] "Topology Admit Handler" podUID="8b4a0dd1168dd5b9a454da9c532cc04b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.046122 kubelet[3019]: I0130 13:53:32.045972 3019 topology_manager.go:215] "Topology Admit Handler" podUID="58e1a7f74683ce69a66e79867a343225" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-102" Jan 30 13:53:32.099729 kubelet[3019]: I0130 13:53:32.099621 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.099729 kubelet[3019]: I0130 13:53:32.099678 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.099729 kubelet[3019]: I0130 13:53:32.099708 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:32.099729 kubelet[3019]: I0130 13:53:32.099734 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.100040 kubelet[3019]: I0130 13:53:32.099760 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.100040 kubelet[3019]: I0130 13:53:32.099794 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:32.100040 kubelet[3019]: 
I0130 13:53:32.099818 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58e1a7f74683ce69a66e79867a343225-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-102\" (UID: \"58e1a7f74683ce69a66e79867a343225\") " pod="kube-system/kube-scheduler-ip-172-31-23-102" Jan 30 13:53:32.100040 kubelet[3019]: I0130 13:53:32.099839 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-ca-certs\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:32.100040 kubelet[3019]: I0130 13:53:32.099872 3019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:32.104144 kubelet[3019]: E0130 13:53:32.104090 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-102?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="400ms" Jan 30 13:53:32.214474 kubelet[3019]: I0130 13:53:32.214441 3019 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:32.214865 kubelet[3019]: E0130 13:53:32.214832 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.102:6443/api/v1/nodes\": dial tcp 172.31.23.102:6443: connect: connection refused" node="ip-172-31-23-102" Jan 30 13:53:32.358200 containerd[2095]: time="2025-01-30T13:53:32.358059944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-102,Uid:756ac0809267efcb7e6c93bd008648a2,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:32.383126 containerd[2095]: time="2025-01-30T13:53:32.383067403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-102,Uid:8b4a0dd1168dd5b9a454da9c532cc04b,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:32.386264 containerd[2095]: time="2025-01-30T13:53:32.386220804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-102,Uid:58e1a7f74683ce69a66e79867a343225,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:32.506976 kubelet[3019]: E0130 13:53:32.506818 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-102?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="800ms" Jan 30 13:53:32.618300 kubelet[3019]: I0130 13:53:32.618184 3019 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:32.618775 kubelet[3019]: E0130 13:53:32.618566 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.102:6443/api/v1/nodes\": dial tcp 172.31.23.102:6443: connect: connection refused" node="ip-172-31-23-102" Jan 30 13:53:32.878296 kubelet[3019]: W0130 13:53:32.878054 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://172.31.23.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:32.878296 kubelet[3019]: E0130 13:53:32.878130 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:32.878369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559701766.mount: Deactivated successfully. Jan 30 13:53:32.880889 containerd[2095]: time="2025-01-30T13:53:32.879973551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:32.884767 containerd[2095]: time="2025-01-30T13:53:32.884709486Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:53:32.885661 containerd[2095]: time="2025-01-30T13:53:32.885622928Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:32.887256 containerd[2095]: time="2025-01-30T13:53:32.887151229Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:32.888498 containerd[2095]: time="2025-01-30T13:53:32.888451019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:32.889009 containerd[2095]: time="2025-01-30T13:53:32.888858658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:53:32.890994 containerd[2095]: time="2025-01-30T13:53:32.890948799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:53:32.894303 containerd[2095]: time="2025-01-30T13:53:32.893638847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:32.897190 containerd[2095]: time="2025-01-30T13:53:32.896964759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.788169ms" Jan 30 13:53:32.902304 containerd[2095]: time="2025-01-30T13:53:32.902128421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.973574ms" Jan 30 13:53:32.905357 containerd[2095]: time="2025-01-30T13:53:32.905302468Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.001156ms" Jan 30 13:53:32.943238 kubelet[3019]: W0130 13:53:32.943139 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:32.943238 kubelet[3019]: E0130 13:53:32.943213 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:33.149968 containerd[2095]: time="2025-01-30T13:53:33.149020931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:33.149968 containerd[2095]: time="2025-01-30T13:53:33.149231080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:33.150558 containerd[2095]: time="2025-01-30T13:53:33.149934151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.154297 containerd[2095]: time="2025-01-30T13:53:33.152988274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155421750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155477490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155168108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155243038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155266765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.155723 containerd[2095]: time="2025-01-30T13:53:33.155407539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.156991 containerd[2095]: time="2025-01-30T13:53:33.156774978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.160895 containerd[2095]: time="2025-01-30T13:53:33.158747476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:33.288903 containerd[2095]: time="2025-01-30T13:53:33.287158902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-102,Uid:58e1a7f74683ce69a66e79867a343225,Namespace:kube-system,Attempt:0,} returns sandbox id \"aedf1ab759dd558a861105c22b8926d5ddbc337d83535865bbc926874feee768\"" Jan 30 13:53:33.301009 containerd[2095]: time="2025-01-30T13:53:33.299864204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-102,Uid:8b4a0dd1168dd5b9a454da9c532cc04b,Namespace:kube-system,Attempt:0,} returns sandbox id \"85d9ee8ad00279a8d9bdf318c5320b158f9fbebef3f46213458d1cbe30c26e97\"" Jan 30 13:53:33.302428 containerd[2095]: time="2025-01-30T13:53:33.302389028Z" level=info msg="CreateContainer within sandbox \"aedf1ab759dd558a861105c22b8926d5ddbc337d83535865bbc926874feee768\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:53:33.309131 kubelet[3019]: E0130 13:53:33.309090 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-102?timeout=10s\": dial tcp 172.31.23.102:6443: connect: connection refused" interval="1.6s" Jan 30 13:53:33.321655 containerd[2095]: time="2025-01-30T13:53:33.321617123Z" level=info msg="CreateContainer within sandbox \"85d9ee8ad00279a8d9bdf318c5320b158f9fbebef3f46213458d1cbe30c26e97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:53:33.324023 containerd[2095]: time="2025-01-30T13:53:33.323836780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-102,Uid:756ac0809267efcb7e6c93bd008648a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5898c269689cefd633e8fd86ef5495c39935b1dbc9b36c8cccfdbefeb5bd68\"" Jan 30 13:53:33.330730 containerd[2095]: time="2025-01-30T13:53:33.330691952Z" level=info msg="CreateContainer within sandbox \"3a5898c269689cefd633e8fd86ef5495c39935b1dbc9b36c8cccfdbefeb5bd68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:53:33.337752 containerd[2095]: time="2025-01-30T13:53:33.337599517Z" level=info msg="CreateContainer within sandbox \"aedf1ab759dd558a861105c22b8926d5ddbc337d83535865bbc926874feee768\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0\"" Jan 30 13:53:33.339713 containerd[2095]: time="2025-01-30T13:53:33.338401625Z" level=info msg="StartContainer for \"e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0\"" Jan 30 13:53:33.373200 kubelet[3019]: W0130 13:53:33.373075 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-102&limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:33.373200 kubelet[3019]: E0130 13:53:33.373153 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-102&limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:33.391645 containerd[2095]: time="2025-01-30T13:53:33.391604643Z" level=info msg="CreateContainer within sandbox 
\"85d9ee8ad00279a8d9bdf318c5320b158f9fbebef3f46213458d1cbe30c26e97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb\"" Jan 30 13:53:33.394347 containerd[2095]: time="2025-01-30T13:53:33.394313575Z" level=info msg="StartContainer for \"db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb\"" Jan 30 13:53:33.394660 containerd[2095]: time="2025-01-30T13:53:33.394627348Z" level=info msg="CreateContainer within sandbox \"3a5898c269689cefd633e8fd86ef5495c39935b1dbc9b36c8cccfdbefeb5bd68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5714f9467c033b89f00dcc76cdd5b453ff1ddb80e2588273757cac284fc42da\"" Jan 30 13:53:33.395594 containerd[2095]: time="2025-01-30T13:53:33.395571043Z" level=info msg="StartContainer for \"d5714f9467c033b89f00dcc76cdd5b453ff1ddb80e2588273757cac284fc42da\"" Jan 30 13:53:33.424518 kubelet[3019]: I0130 13:53:33.422315 3019 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:33.424518 kubelet[3019]: E0130 13:53:33.422869 3019 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.102:6443/api/v1/nodes\": dial tcp 172.31.23.102:6443: connect: connection refused" node="ip-172-31-23-102" Jan 30 13:53:33.500223 kubelet[3019]: W0130 13:53:33.500087 3019 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:33.500450 kubelet[3019]: E0130 13:53:33.500411 3019 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:33.594964 containerd[2095]: time="2025-01-30T13:53:33.593276715Z" level=info msg="StartContainer for \"e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0\" returns successfully" Jan 30 13:53:33.707111 containerd[2095]: time="2025-01-30T13:53:33.705643145Z" level=info msg="StartContainer for \"d5714f9467c033b89f00dcc76cdd5b453ff1ddb80e2588273757cac284fc42da\" returns successfully" Jan 30 13:53:33.719909 containerd[2095]: time="2025-01-30T13:53:33.719255118Z" level=info msg="StartContainer for \"db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb\" returns successfully" Jan 30 13:53:33.775810 kubelet[3019]: E0130 13:53:33.775667 3019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.102:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-102.181f7ccf827013d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-102,UID:ip-172-31-23-102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-102,},FirstTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,LastTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-102,}" Jan 30 
13:53:33.998348 kubelet[3019]: E0130 13:53:33.994985 3019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.102:6443: connect: connection refused Jan 30 13:53:35.028642 kubelet[3019]: I0130 13:53:35.028608 3019 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:36.963002 kubelet[3019]: E0130 13:53:36.962934 3019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-102\" not found" node="ip-172-31-23-102" Jan 30 13:53:36.992659 kubelet[3019]: I0130 13:53:36.992618 3019 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-102" Jan 30 13:53:37.873023 kubelet[3019]: I0130 13:53:37.872717 3019 apiserver.go:52] "Watching apiserver" Jan 30 13:53:37.895399 kubelet[3019]: I0130 13:53:37.895357 3019 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:53:39.443620 systemd[1]: Reloading requested from client PID 3294 ('systemctl') (unit session-7.scope)... Jan 30 13:53:39.443641 systemd[1]: Reloading... Jan 30 13:53:39.581167 zram_generator::config[3330]: No configuration found. Jan 30 13:53:39.760537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:53:39.881644 systemd[1]: Reloading finished in 437 ms. Jan 30 13:53:39.925573 kubelet[3019]: E0130 13:53:39.925353 3019 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-23-102.181f7ccf827013d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-102,UID:ip-172-31-23-102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-102,},FirstTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,LastTimestamp:2025-01-30 13:53:31.876758482 +0000 UTC m=+0.410792222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-102,}" Jan 30 13:53:39.925666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:39.939948 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:53:39.940476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:39.951284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:40.314328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:40.322546 (kubelet)[3401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:53:40.456421 kubelet[3401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:40.456421 kubelet[3401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 30 13:53:40.456421 kubelet[3401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:40.456421 kubelet[3401]: I0130 13:53:40.454789 3401 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:53:40.463334 kubelet[3401]: I0130 13:53:40.463301 3401 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:53:40.463634 kubelet[3401]: I0130 13:53:40.463617 3401 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:53:40.464064 kubelet[3401]: I0130 13:53:40.464049 3401 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:53:40.466407 kubelet[3401]: I0130 13:53:40.466373 3401 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:53:40.478272 kubelet[3401]: I0130 13:53:40.478234 3401 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:53:40.502380 kubelet[3401]: I0130 13:53:40.502344 3401 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:53:40.504598 kubelet[3401]: I0130 13:53:40.504555 3401 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:53:40.504989 kubelet[3401]: I0130 13:53:40.504748 3401 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-102","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:53:40.505137 kubelet[3401]: I0130 13:53:40.505000 3401 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:53:40.505137 kubelet[3401]: I0130 13:53:40.505016 3401 container_manager_linux.go:301] "Creating device plugin manager" 
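[annotation] The HardEvictionThresholds blob in the nodeConfig above is easier to read as the equivalent KubeletConfiguration stanza; these are the same five signals and values, only reformatted (they are also the kubelet's stock Linux defaults):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"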
Jan 30 13:53:40.510528 kubelet[3401]: I0130 13:53:40.510487 3401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:40.510737 kubelet[3401]: I0130 13:53:40.510718 3401 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:53:40.513427 kubelet[3401]: I0130 13:53:40.513396 3401 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:53:40.513633 kubelet[3401]: I0130 13:53:40.513456 3401 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:53:40.513633 kubelet[3401]: I0130 13:53:40.513480 3401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:53:40.521975 kubelet[3401]: I0130 13:53:40.520970 3401 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:53:40.521975 kubelet[3401]: I0130 13:53:40.521212 3401 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:53:40.525798 kubelet[3401]: I0130 13:53:40.525761 3401 server.go:1264] "Started kubelet" Jan 30 13:53:40.541860 kubelet[3401]: I0130 13:53:40.541466 3401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:53:40.552424 kubelet[3401]: I0130 13:53:40.552358 3401 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:53:40.557203 kubelet[3401]: I0130 13:53:40.556223 3401 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:53:40.576018 kubelet[3401]: I0130 13:53:40.575438 3401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:53:40.576018 kubelet[3401]: I0130 13:53:40.556658 3401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:53:40.578757 kubelet[3401]: I0130 13:53:40.559380 3401 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:53:40.587948 kubelet[3401]: I0130 13:53:40.587140 3401 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:53:40.593033 kubelet[3401]: I0130 13:53:40.559269 3401 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:53:40.593378 kubelet[3401]: I0130 13:53:40.593349 3401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:53:40.595588 kubelet[3401]: I0130 13:53:40.595546 3401 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:53:40.602138 kubelet[3401]: I0130 13:53:40.602097 3401 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:53:40.602525 kubelet[3401]: I0130 13:53:40.602318 3401 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:53:40.602525 kubelet[3401]: E0130 13:53:40.602398 3401 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:53:40.603056 kubelet[3401]: I0130 13:53:40.602937 3401 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:53:40.606968 kubelet[3401]: I0130 13:53:40.606766 3401 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:53:40.606968 kubelet[3401]: I0130 13:53:40.606801 3401 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:53:40.627461 kubelet[3401]: E0130 13:53:40.627079 3401 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:53:40.674138 kubelet[3401]: I0130 13:53:40.674108 3401 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-102" Jan 30 13:53:40.697700 kubelet[3401]: I0130 13:53:40.697678 3401 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-102" Jan 30 13:53:40.698556 kubelet[3401]: I0130 13:53:40.698368 3401 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-102" Jan 30 13:53:40.703978 kubelet[3401]: E0130 13:53:40.703055 3401 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:53:40.781120 kubelet[3401]: I0130 13:53:40.781098 3401 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:53:40.781391 kubelet[3401]: I0130 13:53:40.781362 3401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:53:40.781494 kubelet[3401]: I0130 13:53:40.781487 3401 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:40.781670 kubelet[3401]: I0130 13:53:40.781661 3401 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:53:40.781730 kubelet[3401]: I0130 13:53:40.781713 3401 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:53:40.781768 kubelet[3401]: I0130 13:53:40.781764 3401 policy_none.go:49] "None policy: Start" Jan 30 13:53:40.783207 kubelet[3401]: I0130 13:53:40.783192 3401 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:53:40.783305 kubelet[3401]: I0130 13:53:40.783298 3401 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:53:40.783565 kubelet[3401]: I0130 13:53:40.783554 3401 state_mem.go:75] "Updated machine memory state" Jan 30 13:53:40.785904 kubelet[3401]: I0130 13:53:40.785284 3401 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:53:40.786310 kubelet[3401]: I0130 13:53:40.786275 3401 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:53:40.788220 kubelet[3401]: I0130 13:53:40.788203 3401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:53:40.903321 kubelet[3401]: I0130 13:53:40.903216 3401 topology_manager.go:215] "Topology Admit Handler" podUID="756ac0809267efcb7e6c93bd008648a2" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-102" Jan 30 13:53:40.903445 kubelet[3401]: 
I0130 13:53:40.903331 3401 topology_manager.go:215] "Topology Admit Handler" podUID="8b4a0dd1168dd5b9a454da9c532cc04b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:40.903445 kubelet[3401]: I0130 13:53:40.903410 3401 topology_manager.go:215] "Topology Admit Handler" podUID="58e1a7f74683ce69a66e79867a343225" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-102" Jan 30 13:53:40.941361 kubelet[3401]: E0130 13:53:40.941250 3401 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-102\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-102" Jan 30 13:53:40.944923 kubelet[3401]: E0130 13:53:40.944009 3401 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-102\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:41.006687 kubelet[3401]: I0130 13:53:41.005779 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-ca-certs\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:41.006687 kubelet[3401]: I0130 13:53:41.005837 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:41.006687 kubelet[3401]: I0130 13:53:41.005900 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:41.006687 kubelet[3401]: I0130 13:53:41.005931 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58e1a7f74683ce69a66e79867a343225-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-102\" (UID: \"58e1a7f74683ce69a66e79867a343225\") " pod="kube-system/kube-scheduler-ip-172-31-23-102" Jan 30 13:53:41.006687 kubelet[3401]: I0130 13:53:41.006030 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/756ac0809267efcb7e6c93bd008648a2-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-102\" (UID: \"756ac0809267efcb7e6c93bd008648a2\") " pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:41.006963 kubelet[3401]: I0130 13:53:41.006056 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:41.006963 kubelet[3401]: I0130 13:53:41.006085 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:41.006963 kubelet[3401]: I0130 13:53:41.006120 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:41.006963 kubelet[3401]: I0130 13:53:41.006219 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b4a0dd1168dd5b9a454da9c532cc04b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-102\" (UID: \"8b4a0dd1168dd5b9a454da9c532cc04b\") " pod="kube-system/kube-controller-manager-ip-172-31-23-102" Jan 30 13:53:41.517054 kubelet[3401]: I0130 13:53:41.516781 3401 apiserver.go:52] "Watching apiserver" Jan 30 13:53:41.585799 kubelet[3401]: I0130 13:53:41.585715 3401 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:53:41.782190 kubelet[3401]: E0130 13:53:41.778702 3401 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-102\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-102" Jan 30 13:53:41.830797 kubelet[3401]: I0130 13:53:41.830699 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-102" podStartSLOduration=3.830656598 podStartE2EDuration="3.830656598s" podCreationTimestamp="2025-01-30 13:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:41.800139796 +0000 UTC m=+1.466268659" watchObservedRunningTime="2025-01-30 13:53:41.830656598 +0000 UTC m=+1.496785464" Jan 30 13:53:41.831640 kubelet[3401]: I0130 13:53:41.831599 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-102" podStartSLOduration=3.8315694540000003 podStartE2EDuration="3.831569454s" podCreationTimestamp="2025-01-30 13:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:41.831568665 +0000 UTC m=+1.497697530" watchObservedRunningTime="2025-01-30 13:53:41.831569454 +0000 UTC m=+1.497698321" Jan 30 13:53:41.857949 kubelet[3401]: I0130 13:53:41.856641 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-102" podStartSLOduration=1.856620948 podStartE2EDuration="1.856620948s" podCreationTimestamp="2025-01-30 13:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:41.856410731 +0000 UTC m=+1.522539596" watchObservedRunningTime="2025-01-30 13:53:41.856620948 +0000 UTC m=+1.522749811" Jan 30 13:53:43.258000 update_engine[2068]: I20250130 13:53:43.257920 2068 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:53:43.474986 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3457) Jan 30 13:53:44.063093 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3461) Jan 30 13:53:45.706083 sudo[2440]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:45.731813 sshd[2436]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:45.741384 systemd[1]: sshd@6-172.31.23.102:22-139.178.68.195:45752.service: Deactivated successfully. Jan 30 13:53:45.750229 systemd-logind[2064]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:53:45.750232 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:53:45.753861 systemd-logind[2064]: Removed session 7. Jan 30 13:53:52.453761 kubelet[3401]: I0130 13:53:52.453729 3401 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:53:52.466930 containerd[2095]: time="2025-01-30T13:53:52.459079708Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:53:52.467606 kubelet[3401]: I0130 13:53:52.461548 3401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:53:53.153461 kubelet[3401]: I0130 13:53:53.153415 3401 topology_manager.go:215] "Topology Admit Handler" podUID="4c620810-37c8-4a5a-a1e3-75785ef38f32" podNamespace="kube-system" podName="kube-proxy-9kxpq" Jan 30 13:53:53.339538 kubelet[3401]: I0130 13:53:53.339405 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c620810-37c8-4a5a-a1e3-75785ef38f32-xtables-lock\") pod \"kube-proxy-9kxpq\" (UID: \"4c620810-37c8-4a5a-a1e3-75785ef38f32\") " pod="kube-system/kube-proxy-9kxpq" Jan 30 13:53:53.339538 kubelet[3401]: I0130 13:53:53.339455 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c620810-37c8-4a5a-a1e3-75785ef38f32-lib-modules\") pod \"kube-proxy-9kxpq\" (UID: \"4c620810-37c8-4a5a-a1e3-75785ef38f32\") " pod="kube-system/kube-proxy-9kxpq" Jan 30 13:53:53.339538 kubelet[3401]: I0130 13:53:53.339489 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c620810-37c8-4a5a-a1e3-75785ef38f32-kube-proxy\") pod \"kube-proxy-9kxpq\" (UID: \"4c620810-37c8-4a5a-a1e3-75785ef38f32\") " pod="kube-system/kube-proxy-9kxpq" Jan 30 13:53:53.339801 kubelet[3401]: I0130 13:53:53.339769 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjhw\" (UniqueName: \"kubernetes.io/projected/4c620810-37c8-4a5a-a1e3-75785ef38f32-kube-api-access-wkjhw\") pod \"kube-proxy-9kxpq\" (UID: \"4c620810-37c8-4a5a-a1e3-75785ef38f32\") " pod="kube-system/kube-proxy-9kxpq" Jan 30 13:53:53.515270 kubelet[3401]: I0130 13:53:53.511338 3401 topology_manager.go:215] "Topology Admit Handler" podUID="0b83d356-01a4-4910-845d-8529d749b7ce" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-qkw6g" Jan 30 13:53:53.644027 kubelet[3401]: I0130 13:53:53.643934 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wmz\" (UniqueName: 
\"kubernetes.io/projected/0b83d356-01a4-4910-845d-8529d749b7ce-kube-api-access-b2wmz\") pod \"tigera-operator-7bc55997bb-qkw6g\" (UID: \"0b83d356-01a4-4910-845d-8529d749b7ce\") " pod="tigera-operator/tigera-operator-7bc55997bb-qkw6g" Jan 30 13:53:53.644191 kubelet[3401]: I0130 13:53:53.644044 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b83d356-01a4-4910-845d-8529d749b7ce-var-lib-calico\") pod \"tigera-operator-7bc55997bb-qkw6g\" (UID: \"0b83d356-01a4-4910-845d-8529d749b7ce\") " pod="tigera-operator/tigera-operator-7bc55997bb-qkw6g" Jan 30 13:53:53.763200 containerd[2095]: time="2025-01-30T13:53:53.763156208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kxpq,Uid:4c620810-37c8-4a5a-a1e3-75785ef38f32,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:53.804792 containerd[2095]: time="2025-01-30T13:53:53.803936570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:53.804792 containerd[2095]: time="2025-01-30T13:53:53.804051731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:53.804792 containerd[2095]: time="2025-01-30T13:53:53.804077772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:53.804792 containerd[2095]: time="2025-01-30T13:53:53.804206227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:53.825641 containerd[2095]: time="2025-01-30T13:53:53.824948948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qkw6g,Uid:0b83d356-01a4-4910-845d-8529d749b7ce,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:53:53.871352 containerd[2095]: time="2025-01-30T13:53:53.871302951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kxpq,Uid:4c620810-37c8-4a5a-a1e3-75785ef38f32,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fd66188d1105e8f0362ea1dcecb6062bc08a3a39a38022d16e332c7a18933f6\"" Jan 30 13:53:53.907258 containerd[2095]: time="2025-01-30T13:53:53.906914880Z" level=info msg="CreateContainer within sandbox \"5fd66188d1105e8f0362ea1dcecb6062bc08a3a39a38022d16e332c7a18933f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:53:53.923823 containerd[2095]: time="2025-01-30T13:53:53.918384987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:53.923823 containerd[2095]: time="2025-01-30T13:53:53.918476482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:53.923823 containerd[2095]: time="2025-01-30T13:53:53.918499892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:53.923823 containerd[2095]: time="2025-01-30T13:53:53.920937797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:53.966226 containerd[2095]: time="2025-01-30T13:53:53.966172862Z" level=info msg="CreateContainer within sandbox \"5fd66188d1105e8f0362ea1dcecb6062bc08a3a39a38022d16e332c7a18933f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c1510b50270b162abadc47ff0aab68761317c104f519c9e7df00e3576fa8044\"" Jan 30 13:53:53.971149 containerd[2095]: time="2025-01-30T13:53:53.971097438Z" level=info msg="StartContainer for \"0c1510b50270b162abadc47ff0aab68761317c104f519c9e7df00e3576fa8044\"" Jan 30 13:53:54.101476 containerd[2095]: time="2025-01-30T13:53:54.099233176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-qkw6g,Uid:0b83d356-01a4-4910-845d-8529d749b7ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cfc3ebb70a409a59d6246c0375a29a389bb8bb634ea9be8330182d66f814b4d5\"" Jan 30 13:53:54.109458 containerd[2095]: time="2025-01-30T13:53:54.109411649Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:53:54.153305 containerd[2095]: time="2025-01-30T13:53:54.153272065Z" level=info msg="StartContainer for \"0c1510b50270b162abadc47ff0aab68761317c104f519c9e7df00e3576fa8044\" returns successfully" Jan 30 13:53:54.850279 kubelet[3401]: I0130 13:53:54.844543 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9kxpq" podStartSLOduration=1.844519636 podStartE2EDuration="1.844519636s" podCreationTimestamp="2025-01-30 13:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:54.842777518 +0000 UTC m=+14.508906383" watchObservedRunningTime="2025-01-30 13:53:54.844519636 +0000 UTC m=+14.510648496" Jan 30 13:53:57.143548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971614940.mount: Deactivated successfully. 
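Annotation: at 13:53:52 the kubelet received the node's pod CIDR (192.168.0.0/24) and pushed it to the container runtime over CRI; containerd's "No cni config template is specified, wait for other system components to drop the config" line means pod networking stays unconfigured until Calico (installed below via the tigera-operator) writes a CNI config. The kube-proxy and tigera-operator entries then show the usual CRI sequence: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer runs it (the interleaved "loading plugin" lines are each sandbox's runc shim starting up). A small Go sketch, only to make the CIDR concrete (variable names here are illustrative, not from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The pod CIDR the kubelet pushed to the runtime above.
        _, podNet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := podNet.Mask.Size()
        // Prints: pod CIDR 192.168.0.0/24: 256 addresses for pods on this node
        fmt.Printf("pod CIDR %s: %d addresses for pods on this node\n", podNet, 1<<(bits-ones))
    }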
Jan 30 13:53:57.871514 containerd[2095]: time="2025-01-30T13:53:57.871469400Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:57.873176 containerd[2095]: time="2025-01-30T13:53:57.873023449Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:53:57.874070 containerd[2095]: time="2025-01-30T13:53:57.874036153Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:57.877944 containerd[2095]: time="2025-01-30T13:53:57.877251532Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:57.879001 containerd[2095]: time="2025-01-30T13:53:57.878484587Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.769013913s" Jan 30 13:53:57.879001 containerd[2095]: time="2025-01-30T13:53:57.878526672Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:53:57.881971 containerd[2095]: time="2025-01-30T13:53:57.881056772Z" level=info msg="CreateContainer within sandbox \"cfc3ebb70a409a59d6246c0375a29a389bb8bb634ea9be8330182d66f814b4d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:53:57.897069 containerd[2095]: time="2025-01-30T13:53:57.896932520Z" level=info msg="CreateContainer within sandbox \"cfc3ebb70a409a59d6246c0375a29a389bb8bb634ea9be8330182d66f814b4d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0\"" Jan 30 13:53:57.897815 containerd[2095]: time="2025-01-30T13:53:57.897765190Z" level=info msg="StartContainer for \"a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0\"" Jan 30 13:53:57.970913 containerd[2095]: time="2025-01-30T13:53:57.970749363Z" level=info msg="StartContainer for \"a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0\" returns successfully" Jan 30 13:54:01.518575 kubelet[3401]: I0130 13:54:01.518507 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-qkw6g" podStartSLOduration=4.74380474 podStartE2EDuration="8.518487738s" podCreationTimestamp="2025-01-30 13:53:53 +0000 UTC" firstStartedPulling="2025-01-30 13:53:54.10501284 +0000 UTC m=+13.771141685" lastFinishedPulling="2025-01-30 13:53:57.879695823 +0000 UTC m=+17.545824683" observedRunningTime="2025-01-30 13:53:58.807557071 +0000 UTC m=+18.473685933" watchObservedRunningTime="2025-01-30 13:54:01.518487738 +0000 UTC m=+21.184616602" Jan 30 13:54:01.533496 kubelet[3401]: I0130 13:54:01.521671 3401 topology_manager.go:215] "Topology Admit Handler" podUID="ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd" podNamespace="calico-system" podName="calico-typha-5d75cd755d-rw2sw" Jan 30 13:54:01.621546 kubelet[3401]: I0130 13:54:01.621359 3401 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd-typha-certs\") pod \"calico-typha-5d75cd755d-rw2sw\" (UID: \"ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd\") " pod="calico-system/calico-typha-5d75cd755d-rw2sw" Jan 30 13:54:01.622063 kubelet[3401]: I0130 13:54:01.621918 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jcz\" (UniqueName: \"kubernetes.io/projected/ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd-kube-api-access-b8jcz\") pod \"calico-typha-5d75cd755d-rw2sw\" (UID: \"ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd\") " pod="calico-system/calico-typha-5d75cd755d-rw2sw" Jan 30 13:54:01.623895 kubelet[3401]: I0130 13:54:01.622497 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd-tigera-ca-bundle\") pod \"calico-typha-5d75cd755d-rw2sw\" (UID: \"ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd\") " pod="calico-system/calico-typha-5d75cd755d-rw2sw" Jan 30 13:54:01.889406 containerd[2095]: time="2025-01-30T13:54:01.889165718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d75cd755d-rw2sw,Uid:ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:01.894980 kubelet[3401]: I0130 13:54:01.892953 3401 topology_manager.go:215] "Topology Admit Handler" podUID="1757f404-a47b-4aa2-bdb2-a043c1dbf66d" podNamespace="calico-system" podName="calico-node-6t4fh" Jan 30 13:54:02.046496 kubelet[3401]: I0130 13:54:02.046450 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-cni-log-dir\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048257 kubelet[3401]: I0130 13:54:02.048215 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cp9\" (UniqueName: \"kubernetes.io/projected/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-kube-api-access-x7cp9\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048420 kubelet[3401]: I0130 13:54:02.048276 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-cni-bin-dir\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048420 kubelet[3401]: I0130 13:54:02.048304 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-policysync\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048420 kubelet[3401]: I0130 13:54:02.048330 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-xtables-lock\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 
13:54:02.048420 kubelet[3401]: I0130 13:54:02.048351 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-node-certs\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048420 kubelet[3401]: I0130 13:54:02.048374 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-flexvol-driver-host\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048645 kubelet[3401]: I0130 13:54:02.048399 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-lib-modules\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048645 kubelet[3401]: I0130 13:54:02.048435 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-tigera-ca-bundle\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048645 kubelet[3401]: I0130 13:54:02.048464 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-var-lib-calico\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048645 kubelet[3401]: I0130 13:54:02.048492 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-var-run-calico\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.048645 kubelet[3401]: I0130 13:54:02.048518 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1757f404-a47b-4aa2-bdb2-a043c1dbf66d-cni-net-dir\") pod \"calico-node-6t4fh\" (UID: \"1757f404-a47b-4aa2-bdb2-a043c1dbf66d\") " pod="calico-system/calico-node-6t4fh" Jan 30 13:54:02.082091 containerd[2095]: time="2025-01-30T13:54:02.066329437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:02.082091 containerd[2095]: time="2025-01-30T13:54:02.071005048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:02.082091 containerd[2095]: time="2025-01-30T13:54:02.071029425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:02.082091 containerd[2095]: time="2025-01-30T13:54:02.071201956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:02.200994 kubelet[3401]: I0130 13:54:02.185730 3401 topology_manager.go:215] "Topology Admit Handler" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" podNamespace="calico-system" podName="csi-node-driver-qjwhb" Jan 30 13:54:02.200994 kubelet[3401]: E0130 13:54:02.187063 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:02.212419 kubelet[3401]: E0130 13:54:02.203194 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.212419 kubelet[3401]: W0130 13:54:02.203331 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.212419 kubelet[3401]: E0130 13:54:02.203362 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.212681 kubelet[3401]: E0130 13:54:02.212659 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.212728 kubelet[3401]: W0130 13:54:02.212687 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.212768 kubelet[3401]: E0130 13:54:02.212750 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.222915 kubelet[3401]: E0130 13:54:02.214997 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.222915 kubelet[3401]: W0130 13:54:02.215026 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.222915 kubelet[3401]: E0130 13:54:02.215056 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.222915 kubelet[3401]: E0130 13:54:02.215344 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.222915 kubelet[3401]: W0130 13:54:02.215353 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.222915 kubelet[3401]: E0130 13:54:02.215365 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.237137 kubelet[3401]: E0130 13:54:02.237089 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.237137 kubelet[3401]: W0130 13:54:02.237126 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.237343 kubelet[3401]: E0130 13:54:02.237154 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.252521 kubelet[3401]: E0130 13:54:02.245244 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.252521 kubelet[3401]: W0130 13:54:02.245274 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.252521 kubelet[3401]: E0130 13:54:02.245302 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.252521 kubelet[3401]: E0130 13:54:02.252193 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.252521 kubelet[3401]: W0130 13:54:02.252219 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.252521 kubelet[3401]: E0130 13:54:02.252258 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.266726 kubelet[3401]: E0130 13:54:02.266488 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.266726 kubelet[3401]: W0130 13:54:02.266516 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.266726 kubelet[3401]: E0130 13:54:02.266660 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.267638 kubelet[3401]: E0130 13:54:02.267523 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.267638 kubelet[3401]: W0130 13:54:02.267557 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.267638 kubelet[3401]: E0130 13:54:02.267579 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.268167 kubelet[3401]: E0130 13:54:02.268153 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.268253 kubelet[3401]: W0130 13:54:02.268242 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.268341 kubelet[3401]: E0130 13:54:02.268326 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.268633 kubelet[3401]: E0130 13:54:02.268620 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.268730 kubelet[3401]: W0130 13:54:02.268719 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.269130 kubelet[3401]: E0130 13:54:02.268823 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.285319 kubelet[3401]: E0130 13:54:02.284726 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.285319 kubelet[3401]: W0130 13:54:02.284753 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.320972 kubelet[3401]: E0130 13:54:02.312901 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.321114 kubelet[3401]: E0130 13:54:02.320984 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.321178 kubelet[3401]: W0130 13:54:02.321130 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.321178 kubelet[3401]: E0130 13:54:02.321162 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.352686 kubelet[3401]: E0130 13:54:02.352627 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.352851 kubelet[3401]: W0130 13:54:02.352769 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.354744 kubelet[3401]: E0130 13:54:02.354708 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.358020 kubelet[3401]: E0130 13:54:02.357860 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.358020 kubelet[3401]: W0130 13:54:02.357903 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.358020 kubelet[3401]: E0130 13:54:02.357931 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.368714 kubelet[3401]: E0130 13:54:02.362977 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.368863 kubelet[3401]: W0130 13:54:02.368712 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.368863 kubelet[3401]: E0130 13:54:02.368759 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.376318 kubelet[3401]: E0130 13:54:02.375963 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.376318 kubelet[3401]: W0130 13:54:02.375996 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.376318 kubelet[3401]: E0130 13:54:02.376026 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.388189 kubelet[3401]: E0130 13:54:02.387832 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.388189 kubelet[3401]: W0130 13:54:02.387871 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.388189 kubelet[3401]: E0130 13:54:02.387917 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.389975 kubelet[3401]: E0130 13:54:02.389941 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.390127 kubelet[3401]: W0130 13:54:02.389983 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.390127 kubelet[3401]: E0130 13:54:02.390019 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.391232 kubelet[3401]: E0130 13:54:02.391212 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.391477 kubelet[3401]: W0130 13:54:02.391355 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.391477 kubelet[3401]: E0130 13:54:02.391386 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.400534 kubelet[3401]: E0130 13:54:02.391835 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.400534 kubelet[3401]: W0130 13:54:02.400351 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.400534 kubelet[3401]: E0130 13:54:02.400394 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.401292 kubelet[3401]: E0130 13:54:02.401164 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.401292 kubelet[3401]: W0130 13:54:02.401181 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.405550 kubelet[3401]: E0130 13:54:02.405520 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.405550 kubelet[3401]: W0130 13:54:02.405547 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.406926 kubelet[3401]: E0130 13:54:02.406708 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.406926 kubelet[3401]: E0130 13:54:02.406749 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.406926 kubelet[3401]: I0130 13:54:02.406782 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89933e39-f4f4-49ac-8467-88e1539cd0a5-varrun\") pod \"csi-node-driver-qjwhb\" (UID: \"89933e39-f4f4-49ac-8467-88e1539cd0a5\") " pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:02.418743 kubelet[3401]: E0130 13:54:02.418694 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.420132 kubelet[3401]: W0130 13:54:02.420096 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.421231 kubelet[3401]: E0130 13:54:02.421040 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.439034 kubelet[3401]: E0130 13:54:02.437828 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.439034 kubelet[3401]: W0130 13:54:02.439029 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.440295 kubelet[3401]: E0130 13:54:02.440269 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.440554 kubelet[3401]: W0130 13:54:02.440437 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.440938 kubelet[3401]: E0130 13:54:02.440722 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.441211 kubelet[3401]: E0130 13:54:02.441083 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.441211 kubelet[3401]: W0130 13:54:02.441098 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.441482 kubelet[3401]: E0130 13:54:02.441470 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.441672 kubelet[3401]: W0130 13:54:02.441657 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.441782 kubelet[3401]: E0130 13:54:02.441761 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.442002 kubelet[3401]: E0130 13:54:02.441932 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.442977 kubelet[3401]: E0130 13:54:02.442943 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.455464 kubelet[3401]: I0130 13:54:02.454967 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89933e39-f4f4-49ac-8467-88e1539cd0a5-kubelet-dir\") pod \"csi-node-driver-qjwhb\" (UID: \"89933e39-f4f4-49ac-8467-88e1539cd0a5\") " pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:02.455464 kubelet[3401]: E0130 13:54:02.454854 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.455464 kubelet[3401]: W0130 13:54:02.455009 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.455464 kubelet[3401]: E0130 13:54:02.455032 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.465728 kubelet[3401]: E0130 13:54:02.465690 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.465867 kubelet[3401]: W0130 13:54:02.465737 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.470073 kubelet[3401]: E0130 13:54:02.469942 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.499224 kubelet[3401]: E0130 13:54:02.499173 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.499224 kubelet[3401]: W0130 13:54:02.499219 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.513906 kubelet[3401]: E0130 13:54:02.513575 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.513906 kubelet[3401]: W0130 13:54:02.513602 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.531198 kubelet[3401]: E0130 13:54:02.531153 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.531677 kubelet[3401]: E0130 13:54:02.531221 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.532354 kubelet[3401]: E0130 13:54:02.532189 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.532354 kubelet[3401]: W0130 13:54:02.532210 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.533184 kubelet[3401]: E0130 13:54:02.533072 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.533184 kubelet[3401]: W0130 13:54:02.533088 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.533451 kubelet[3401]: E0130 13:54:02.533431 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.533615 kubelet[3401]: W0130 13:54:02.533511 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.533615 kubelet[3401]: E0130 13:54:02.533533 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.533829 kubelet[3401]: E0130 13:54:02.533814 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.533932 kubelet[3401]: W0130 13:54:02.533919 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.534007 kubelet[3401]: E0130 13:54:02.533997 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.543951 kubelet[3401]: E0130 13:54:02.534257 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.543951 kubelet[3401]: W0130 13:54:02.534269 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.543951 kubelet[3401]: E0130 13:54:02.534280 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.543951 kubelet[3401]: E0130 13:54:02.533950 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.543951 kubelet[3401]: E0130 13:54:02.533965 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.544646 kubelet[3401]: E0130 13:54:02.544619 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.544765 kubelet[3401]: W0130 13:54:02.544749 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.544839 kubelet[3401]: E0130 13:54:02.544827 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.545296 kubelet[3401]: E0130 13:54:02.545280 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.545388 kubelet[3401]: W0130 13:54:02.545374 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.545490 kubelet[3401]: E0130 13:54:02.545477 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.546222 kubelet[3401]: E0130 13:54:02.546193 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.546950 kubelet[3401]: W0130 13:54:02.546351 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.546950 kubelet[3401]: E0130 13:54:02.546373 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.548101 kubelet[3401]: E0130 13:54:02.548083 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.548209 kubelet[3401]: W0130 13:54:02.548195 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.548401 kubelet[3401]: E0130 13:54:02.548280 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.548968 kubelet[3401]: E0130 13:54:02.548953 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.549065 kubelet[3401]: W0130 13:54:02.549052 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.549199 kubelet[3401]: E0130 13:54:02.549140 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.568840 containerd[2095]: time="2025-01-30T13:54:02.560224503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t4fh,Uid:1757f404-a47b-4aa2-bdb2-a043c1dbf66d,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:02.663085 kubelet[3401]: E0130 13:54:02.663048 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.663085 kubelet[3401]: W0130 13:54:02.663081 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.663288 kubelet[3401]: E0130 13:54:02.663107 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.663288 kubelet[3401]: I0130 13:54:02.663162 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89933e39-f4f4-49ac-8467-88e1539cd0a5-registration-dir\") pod \"csi-node-driver-qjwhb\" (UID: \"89933e39-f4f4-49ac-8467-88e1539cd0a5\") " pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:02.664000 kubelet[3401]: E0130 13:54:02.663975 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.664000 kubelet[3401]: W0130 13:54:02.663997 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.664138 kubelet[3401]: E0130 13:54:02.664033 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.692759 kubelet[3401]: E0130 13:54:02.682104 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.692759 kubelet[3401]: W0130 13:54:02.682135 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.692759 kubelet[3401]: E0130 13:54:02.682169 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.693039 kubelet[3401]: E0130 13:54:02.692906 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.693039 kubelet[3401]: W0130 13:54:02.692930 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.693126 kubelet[3401]: E0130 13:54:02.693076 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.693941 kubelet[3401]: E0130 13:54:02.693314 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.693941 kubelet[3401]: W0130 13:54:02.693329 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.693941 kubelet[3401]: E0130 13:54:02.693389 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.693941 kubelet[3401]: I0130 13:54:02.693424 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5lk\" (UniqueName: \"kubernetes.io/projected/89933e39-f4f4-49ac-8467-88e1539cd0a5-kube-api-access-cp5lk\") pod \"csi-node-driver-qjwhb\" (UID: \"89933e39-f4f4-49ac-8467-88e1539cd0a5\") " pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:02.695000 kubelet[3401]: E0130 13:54:02.694541 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.695000 kubelet[3401]: W0130 13:54:02.694557 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.695000 kubelet[3401]: E0130 13:54:02.694578 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.699898 kubelet[3401]: E0130 13:54:02.695991 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.699898 kubelet[3401]: W0130 13:54:02.696009 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.699898 kubelet[3401]: E0130 13:54:02.696081 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.699898 kubelet[3401]: E0130 13:54:02.696285 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.699898 kubelet[3401]: W0130 13:54:02.696295 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.699898 kubelet[3401]: E0130 13:54:02.696834 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.699898 kubelet[3401]: I0130 13:54:02.696910 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89933e39-f4f4-49ac-8467-88e1539cd0a5-socket-dir\") pod \"csi-node-driver-qjwhb\" (UID: \"89933e39-f4f4-49ac-8467-88e1539cd0a5\") " pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:02.699898 kubelet[3401]: E0130 13:54:02.697705 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.699898 kubelet[3401]: W0130 13:54:02.697718 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.700346 kubelet[3401]: E0130 13:54:02.698002 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.700346 kubelet[3401]: E0130 13:54:02.698141 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.700346 kubelet[3401]: W0130 13:54:02.698150 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.700346 kubelet[3401]: E0130 13:54:02.699050 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.701900 kubelet[3401]: E0130 13:54:02.701082 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.701900 kubelet[3401]: W0130 13:54:02.701123 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.701900 kubelet[3401]: E0130 13:54:02.701307 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.701900 kubelet[3401]: E0130 13:54:02.701660 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.701900 kubelet[3401]: W0130 13:54:02.701671 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.702330 kubelet[3401]: E0130 13:54:02.702312 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.703667 kubelet[3401]: E0130 13:54:02.703644 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.703667 kubelet[3401]: W0130 13:54:02.703665 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.717923 kubelet[3401]: E0130 13:54:02.716958 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.720429 kubelet[3401]: E0130 13:54:02.720037 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.720429 kubelet[3401]: W0130 13:54:02.720064 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.720429 kubelet[3401]: E0130 13:54:02.720398 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.720718 kubelet[3401]: E0130 13:54:02.720701 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.720718 kubelet[3401]: W0130 13:54:02.720719 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.721084 kubelet[3401]: E0130 13:54:02.720898 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.739318 kubelet[3401]: E0130 13:54:02.739277 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.739318 kubelet[3401]: W0130 13:54:02.739312 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.739318 kubelet[3401]: E0130 13:54:02.739338 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754135 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.764133 kubelet[3401]: W0130 13:54:02.754164 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754193 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754518 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.764133 kubelet[3401]: W0130 13:54:02.754530 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754544 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754798 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.764133 kubelet[3401]: W0130 13:54:02.754807 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.764133 kubelet[3401]: E0130 13:54:02.754818 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.821159 kubelet[3401]: E0130 13:54:02.821131 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.821526 kubelet[3401]: W0130 13:54:02.821319 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.821526 kubelet[3401]: E0130 13:54:02.821347 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.825159 kubelet[3401]: E0130 13:54:02.825007 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.825159 kubelet[3401]: W0130 13:54:02.825034 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.825159 kubelet[3401]: E0130 13:54:02.825069 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.825481 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.834140 kubelet[3401]: W0130 13:54:02.825514 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.825539 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.825801 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.834140 kubelet[3401]: W0130 13:54:02.825812 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.825823 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.827096 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.834140 kubelet[3401]: W0130 13:54:02.827110 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.834140 kubelet[3401]: E0130 13:54:02.827139 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.836057 kubelet[3401]: E0130 13:54:02.835987 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.836057 kubelet[3401]: W0130 13:54:02.836055 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.836248 kubelet[3401]: E0130 13:54:02.836110 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.838918 kubelet[3401]: E0130 13:54:02.838856 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.838918 kubelet[3401]: W0130 13:54:02.838915 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.839549 kubelet[3401]: E0130 13:54:02.838967 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.839549 kubelet[3401]: E0130 13:54:02.839334 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.839549 kubelet[3401]: W0130 13:54:02.839356 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.839549 kubelet[3401]: E0130 13:54:02.839504 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.857761 kubelet[3401]: E0130 13:54:02.846483 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.857761 kubelet[3401]: W0130 13:54:02.846617 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.857761 kubelet[3401]: E0130 13:54:02.846758 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.857761 kubelet[3401]: E0130 13:54:02.857318 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.857761 kubelet[3401]: W0130 13:54:02.857346 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.866912 kubelet[3401]: E0130 13:54:02.865400 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.867199 kubelet[3401]: E0130 13:54:02.867173 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.867280 kubelet[3401]: W0130 13:54:02.867200 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.867280 kubelet[3401]: E0130 13:54:02.867240 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.867719 kubelet[3401]: E0130 13:54:02.867698 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.867719 kubelet[3401]: W0130 13:54:02.867717 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.877939 kubelet[3401]: E0130 13:54:02.877621 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.889210 kubelet[3401]: E0130 13:54:02.880355 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.889210 kubelet[3401]: W0130 13:54:02.880436 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.889210 kubelet[3401]: E0130 13:54:02.880563 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.889210 kubelet[3401]: E0130 13:54:02.883093 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.889210 kubelet[3401]: W0130 13:54:02.883117 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.889210 kubelet[3401]: E0130 13:54:02.883236 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.889210 kubelet[3401]: E0130 13:54:02.883801 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.889616 kubelet[3401]: W0130 13:54:02.883816 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.889616 kubelet[3401]: E0130 13:54:02.889341 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.939504 containerd[2095]: time="2025-01-30T13:54:02.939392926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:02.944665 kubelet[3401]: E0130 13:54:02.944029 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.949924 kubelet[3401]: W0130 13:54:02.948416 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.949924 kubelet[3401]: E0130 13:54:02.948584 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:02.950104 containerd[2095]: time="2025-01-30T13:54:02.948906338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:02.955985 containerd[2095]: time="2025-01-30T13:54:02.954020595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:02.955985 containerd[2095]: time="2025-01-30T13:54:02.954182535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:02.956171 kubelet[3401]: E0130 13:54:02.956101 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:02.956601 kubelet[3401]: W0130 13:54:02.956127 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:02.956601 kubelet[3401]: E0130 13:54:02.956419 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:02.957082 containerd[2095]: time="2025-01-30T13:54:02.956959772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d75cd755d-rw2sw,Uid:ea6c40d2-b6d3-4cb2-a53c-d7d841f15bcd,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b97cf3477465cdd2e877babe8afd0c565737b72b2f25a554520d86315035259\"" Jan 30 13:54:02.959908 containerd[2095]: time="2025-01-30T13:54:02.959795944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:54:03.039415 containerd[2095]: time="2025-01-30T13:54:03.039352408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t4fh,Uid:1757f404-a47b-4aa2-bdb2-a043c1dbf66d,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\"" Jan 30 13:54:03.602751 kubelet[3401]: E0130 13:54:03.602666 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:04.473367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789812042.mount: Deactivated successfully. Jan 30 13:54:05.482730 containerd[2095]: time="2025-01-30T13:54:05.482318994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.486575 containerd[2095]: time="2025-01-30T13:54:05.484995582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:54:05.487031 containerd[2095]: time="2025-01-30T13:54:05.486948000Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.495213 containerd[2095]: time="2025-01-30T13:54:05.495142130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:05.497959 containerd[2095]: time="2025-01-30T13:54:05.497915894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.538076217s" Jan 30 13:54:05.498465 containerd[2095]: time="2025-01-30T13:54:05.498257671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:54:05.502290 containerd[2095]: time="2025-01-30T13:54:05.502258919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:54:05.517488 containerd[2095]: time="2025-01-30T13:54:05.517445516Z" level=info msg="CreateContainer within sandbox \"5b97cf3477465cdd2e877babe8afd0c565737b72b2f25a554520d86315035259\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:54:05.555925 containerd[2095]: time="2025-01-30T13:54:05.554351600Z" level=info msg="CreateContainer within sandbox 
\"5b97cf3477465cdd2e877babe8afd0c565737b72b2f25a554520d86315035259\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e61d8111e8e64df342f800cfb485f9985f323454a229b3990f3add4c4f6205a6\"" Jan 30 13:54:05.579326 containerd[2095]: time="2025-01-30T13:54:05.578751184Z" level=info msg="StartContainer for \"e61d8111e8e64df342f800cfb485f9985f323454a229b3990f3add4c4f6205a6\"" Jan 30 13:54:05.590626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204773635.mount: Deactivated successfully. Jan 30 13:54:05.606467 kubelet[3401]: E0130 13:54:05.606156 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:05.776829 containerd[2095]: time="2025-01-30T13:54:05.776381835Z" level=info msg="StartContainer for \"e61d8111e8e64df342f800cfb485f9985f323454a229b3990f3add4c4f6205a6\" returns successfully" Jan 30 13:54:05.997630 kubelet[3401]: E0130 13:54:05.997470 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:05.997630 kubelet[3401]: W0130 13:54:05.997515 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:05.997630 kubelet[3401]: E0130 13:54:05.997539 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:05.998428 kubelet[3401]: E0130 13:54:05.998410 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:05.998979 kubelet[3401]: W0130 13:54:05.998552 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:05.998979 kubelet[3401]: E0130 13:54:05.998578 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.000100 kubelet[3401]: E0130 13:54:05.999415 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.000100 kubelet[3401]: W0130 13:54:05.999430 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.000100 kubelet[3401]: E0130 13:54:05.999447 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.000538 kubelet[3401]: E0130 13:54:06.000354 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.000538 kubelet[3401]: W0130 13:54:06.000370 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.000538 kubelet[3401]: E0130 13:54:06.000401 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.002482 kubelet[3401]: E0130 13:54:06.001643 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.002482 kubelet[3401]: W0130 13:54:06.001659 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.002482 kubelet[3401]: E0130 13:54:06.002375 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.003932 kubelet[3401]: E0130 13:54:06.003441 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.003932 kubelet[3401]: W0130 13:54:06.003456 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.003932 kubelet[3401]: E0130 13:54:06.003476 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.005403 kubelet[3401]: E0130 13:54:06.005252 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.005403 kubelet[3401]: W0130 13:54:06.005268 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.005403 kubelet[3401]: E0130 13:54:06.005288 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.005606 kubelet[3401]: E0130 13:54:06.005544 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.005606 kubelet[3401]: W0130 13:54:06.005555 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.005606 kubelet[3401]: E0130 13:54:06.005571 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.005997 kubelet[3401]: E0130 13:54:06.005813 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.005997 kubelet[3401]: W0130 13:54:06.005824 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.005997 kubelet[3401]: E0130 13:54:06.005969 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.006589 kubelet[3401]: E0130 13:54:06.006565 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.006589 kubelet[3401]: W0130 13:54:06.006579 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.006706 kubelet[3401]: E0130 13:54:06.006593 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.008389 kubelet[3401]: E0130 13:54:06.006965 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.008389 kubelet[3401]: W0130 13:54:06.006977 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.008389 kubelet[3401]: E0130 13:54:06.006993 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.008389 kubelet[3401]: E0130 13:54:06.007272 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.008389 kubelet[3401]: W0130 13:54:06.007285 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.008389 kubelet[3401]: E0130 13:54:06.007298 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.009619 kubelet[3401]: E0130 13:54:06.008949 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.009619 kubelet[3401]: W0130 13:54:06.008965 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.009619 kubelet[3401]: E0130 13:54:06.008981 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.009619 kubelet[3401]: E0130 13:54:06.009496 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.009619 kubelet[3401]: W0130 13:54:06.009509 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.009619 kubelet[3401]: E0130 13:54:06.009523 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.011600 kubelet[3401]: E0130 13:54:06.010560 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.011600 kubelet[3401]: W0130 13:54:06.010573 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.011600 kubelet[3401]: E0130 13:54:06.010588 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.089189 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091220 kubelet[3401]: W0130 13:54:06.089237 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.089264 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.089679 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091220 kubelet[3401]: W0130 13:54:06.089692 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.089712 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.090127 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091220 kubelet[3401]: W0130 13:54:06.090139 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.090164 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.091220 kubelet[3401]: E0130 13:54:06.090495 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091747 kubelet[3401]: W0130 13:54:06.090507 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091747 kubelet[3401]: E0130 13:54:06.090531 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.091747 kubelet[3401]: E0130 13:54:06.090757 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091747 kubelet[3401]: W0130 13:54:06.090767 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091747 kubelet[3401]: E0130 13:54:06.090790 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.091747 kubelet[3401]: E0130 13:54:06.091088 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.091747 kubelet[3401]: W0130 13:54:06.091099 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.091747 kubelet[3401]: E0130 13:54:06.091124 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.092200 kubelet[3401]: E0130 13:54:06.092104 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.092200 kubelet[3401]: W0130 13:54:06.092116 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.092399 kubelet[3401]: E0130 13:54:06.092348 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.096748 kubelet[3401]: E0130 13:54:06.094801 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.097185 kubelet[3401]: W0130 13:54:06.096749 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.097284 kubelet[3401]: E0130 13:54:06.097263 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.111974 kubelet[3401]: E0130 13:54:06.097702 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.111974 kubelet[3401]: W0130 13:54:06.097720 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.111974 kubelet[3401]: E0130 13:54:06.108072 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.111974 kubelet[3401]: E0130 13:54:06.110081 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.111974 kubelet[3401]: W0130 13:54:06.110117 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.111974 kubelet[3401]: E0130 13:54:06.110484 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.111974 kubelet[3401]: E0130 13:54:06.110856 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.111974 kubelet[3401]: W0130 13:54:06.110870 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112146 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114209 kubelet[3401]: W0130 13:54:06.112162 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112182 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112408 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114209 kubelet[3401]: W0130 13:54:06.112417 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112428 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112639 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114209 kubelet[3401]: W0130 13:54:06.112649 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.112662 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.114209 kubelet[3401]: E0130 13:54:06.113110 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114679 kubelet[3401]: W0130 13:54:06.113120 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113134 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113164 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113382 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114679 kubelet[3401]: W0130 13:54:06.113393 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113404 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113614 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.114679 kubelet[3401]: W0130 13:54:06.113625 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.113640 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:06.114679 kubelet[3401]: E0130 13:54:06.114026 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:06.115075 kubelet[3401]: W0130 13:54:06.114037 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:06.115075 kubelet[3401]: E0130 13:54:06.114050 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:06.963644 kubelet[3401]: I0130 13:54:06.963520 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:07.021912 kubelet[3401]: E0130 13:54:07.021675 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.021912 kubelet[3401]: W0130 13:54:07.021702 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.021912 kubelet[3401]: E0130 13:54:07.021748 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.025117 kubelet[3401]: E0130 13:54:07.023913 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.025117 kubelet[3401]: W0130 13:54:07.023943 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.025117 kubelet[3401]: E0130 13:54:07.023967 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.034318 kubelet[3401]: E0130 13:54:07.031455 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.034318 kubelet[3401]: W0130 13:54:07.031677 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.034318 kubelet[3401]: E0130 13:54:07.031711 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.034318 kubelet[3401]: E0130 13:54:07.033317 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.034318 kubelet[3401]: W0130 13:54:07.033336 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.034318 kubelet[3401]: E0130 13:54:07.033456 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.034318 kubelet[3401]: E0130 13:54:07.034299 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.034318 kubelet[3401]: W0130 13:54:07.034312 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.034352 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.035154 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.039455 kubelet[3401]: W0130 13:54:07.035167 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.035243 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.038467 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.039455 kubelet[3401]: W0130 13:54:07.038499 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.038519 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.039129 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.039455 kubelet[3401]: W0130 13:54:07.039143 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.039455 kubelet[3401]: E0130 13:54:07.039159 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.041266 kubelet[3401]: E0130 13:54:07.040321 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.041266 kubelet[3401]: W0130 13:54:07.040728 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.041266 kubelet[3401]: E0130 13:54:07.040754 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.043574 kubelet[3401]: E0130 13:54:07.042318 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.043574 kubelet[3401]: W0130 13:54:07.042333 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.043574 kubelet[3401]: E0130 13:54:07.042360 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.045580 kubelet[3401]: E0130 13:54:07.045106 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.045580 kubelet[3401]: W0130 13:54:07.045128 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.045580 kubelet[3401]: E0130 13:54:07.045563 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.060473 kubelet[3401]: E0130 13:54:07.059529 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.061667 kubelet[3401]: W0130 13:54:07.061460 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.063115 kubelet[3401]: E0130 13:54:07.062943 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.072250 kubelet[3401]: E0130 13:54:07.071954 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.073429 kubelet[3401]: W0130 13:54:07.072044 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.073730 kubelet[3401]: E0130 13:54:07.073655 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.077994 kubelet[3401]: E0130 13:54:07.077607 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.079866 kubelet[3401]: W0130 13:54:07.077634 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.080909 kubelet[3401]: E0130 13:54:07.079291 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.084916 kubelet[3401]: E0130 13:54:07.084751 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.085055 kubelet[3401]: W0130 13:54:07.084919 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.085055 kubelet[3401]: E0130 13:54:07.084948 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.111525 kubelet[3401]: E0130 13:54:07.111247 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.111525 kubelet[3401]: W0130 13:54:07.111274 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.111525 kubelet[3401]: E0130 13:54:07.111301 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.112190 kubelet[3401]: E0130 13:54:07.111979 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.112190 kubelet[3401]: W0130 13:54:07.112008 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.112190 kubelet[3401]: E0130 13:54:07.112033 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.112640 kubelet[3401]: E0130 13:54:07.112617 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.112803 kubelet[3401]: W0130 13:54:07.112714 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.112861 kubelet[3401]: E0130 13:54:07.112808 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.113656 kubelet[3401]: E0130 13:54:07.113547 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.113656 kubelet[3401]: W0130 13:54:07.113561 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.113656 kubelet[3401]: E0130 13:54:07.113645 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.114090 kubelet[3401]: E0130 13:54:07.113918 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.114090 kubelet[3401]: W0130 13:54:07.113936 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.114090 kubelet[3401]: E0130 13:54:07.113963 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.114490 kubelet[3401]: E0130 13:54:07.114468 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.114490 kubelet[3401]: W0130 13:54:07.114484 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.114616 kubelet[3401]: E0130 13:54:07.114569 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.114786 kubelet[3401]: E0130 13:54:07.114763 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.114786 kubelet[3401]: W0130 13:54:07.114778 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.114987 kubelet[3401]: E0130 13:54:07.114841 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.115582 kubelet[3401]: E0130 13:54:07.115269 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.115582 kubelet[3401]: W0130 13:54:07.115282 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.115582 kubelet[3401]: E0130 13:54:07.115557 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.116142 kubelet[3401]: E0130 13:54:07.115981 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.116142 kubelet[3401]: W0130 13:54:07.115993 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.116142 kubelet[3401]: E0130 13:54:07.116011 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.116972 kubelet[3401]: E0130 13:54:07.116620 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.116972 kubelet[3401]: W0130 13:54:07.116634 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.116972 kubelet[3401]: E0130 13:54:07.116718 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.119090 kubelet[3401]: E0130 13:54:07.119061 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.119090 kubelet[3401]: W0130 13:54:07.119090 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.119812 kubelet[3401]: E0130 13:54:07.119790 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.120171 kubelet[3401]: E0130 13:54:07.120154 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.120247 kubelet[3401]: W0130 13:54:07.120172 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.120293 kubelet[3401]: E0130 13:54:07.120260 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.121101 kubelet[3401]: E0130 13:54:07.121063 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.121101 kubelet[3401]: W0130 13:54:07.121078 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.121355 kubelet[3401]: E0130 13:54:07.121332 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.121924 kubelet[3401]: E0130 13:54:07.121895 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.121924 kubelet[3401]: W0130 13:54:07.121919 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.122264 kubelet[3401]: E0130 13:54:07.122022 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.123078 kubelet[3401]: E0130 13:54:07.122704 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.123078 kubelet[3401]: W0130 13:54:07.122716 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.123078 kubelet[3401]: E0130 13:54:07.122734 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.126525 kubelet[3401]: E0130 13:54:07.125744 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.126525 kubelet[3401]: W0130 13:54:07.125765 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.126525 kubelet[3401]: E0130 13:54:07.126064 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.126525 kubelet[3401]: E0130 13:54:07.126360 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.126525 kubelet[3401]: W0130 13:54:07.126372 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.126525 kubelet[3401]: E0130 13:54:07.126385 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:54:07.127593 kubelet[3401]: E0130 13:54:07.126837 3401 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:54:07.127593 kubelet[3401]: W0130 13:54:07.126849 3401 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:54:07.127593 kubelet[3401]: E0130 13:54:07.126862 3401 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:54:07.232317 containerd[2095]: time="2025-01-30T13:54:07.232148978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:07.232317 containerd[2095]: time="2025-01-30T13:54:07.253266661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:54:07.260500 containerd[2095]: time="2025-01-30T13:54:07.258979010Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:07.280067 containerd[2095]: time="2025-01-30T13:54:07.279961832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:07.283034 containerd[2095]: time="2025-01-30T13:54:07.282983530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.780573115s" Jan 30 13:54:07.283249 containerd[2095]: time="2025-01-30T13:54:07.283040591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:54:07.307997 containerd[2095]: time="2025-01-30T13:54:07.307673729Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:54:07.341905 containerd[2095]: time="2025-01-30T13:54:07.341728018Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af\"" Jan 30 13:54:07.344500 containerd[2095]: time="2025-01-30T13:54:07.344189978Z" level=info msg="StartContainer for \"3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af\"" Jan 30 13:54:07.518317 containerd[2095]: time="2025-01-30T13:54:07.518245488Z" level=info msg="StartContainer for \"3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af\" returns successfully" Jan 30 13:54:07.588858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af-rootfs.mount: Deactivated successfully. 
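The FlexVolume noise above has a simple mechanical cause: the kubelet's plugin prober executes each driver directory's binary with an init argument and expects a JSON status object on stdout. The nodeagent~uds/uds binary is not on disk yet (installing it is exactly what the flexvol-driver container started above is for), so the call produces no output, and decoding zero bytes is what yields Go's "unexpected end of JSON input". A minimal sketch of that failure mode, assuming only the standard library; the DriverStatus shape here is illustrative, not the kubelet's exact type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is an illustrative stand-in for a FlexVolume response;
// the kubelet's real type (driver-call.go) carries more fields.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Driver path taken from the log; on this node the binary is absent
	// until the flexvol-driver container copies it into place.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, execErr := exec.Command(driver, "init").CombinedOutput()
	// The exec fails (the file does not exist), so out stays empty.

	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		// Decoding zero bytes reports: unexpected end of JSON input
		fmt.Printf("exec: %v; unmarshal: %v\n", execErr, err)
	}
}

Once the flexvol-driver init container has copied the binary into that directory, the same probe returns a parseable status object and the warnings stop, which is why the storm ends after the container start recorded above.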
Jan 30 13:54:07.606859 kubelet[3401]: E0130 13:54:07.606799 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:07.773484 containerd[2095]: time="2025-01-30T13:54:07.711906810Z" level=info msg="shim disconnected" id=3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af namespace=k8s.io Jan 30 13:54:07.773484 containerd[2095]: time="2025-01-30T13:54:07.773287216Z" level=warning msg="cleaning up after shim disconnected" id=3ae87bbd4260c17b216949ca64b4e241f0da5e14cbc7ed9e8b65275a684d41af namespace=k8s.io Jan 30 13:54:07.773484 containerd[2095]: time="2025-01-30T13:54:07.773307247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:07.972943 containerd[2095]: time="2025-01-30T13:54:07.972038809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:54:08.008054 kubelet[3401]: I0130 13:54:08.007664 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d75cd755d-rw2sw" podStartSLOduration=4.467081804 podStartE2EDuration="7.007460112s" podCreationTimestamp="2025-01-30 13:54:01 +0000 UTC" firstStartedPulling="2025-01-30 13:54:02.959256825 +0000 UTC m=+22.625385673" lastFinishedPulling="2025-01-30 13:54:05.499635139 +0000 UTC m=+25.165763981" observedRunningTime="2025-01-30 13:54:05.991513116 +0000 UTC m=+25.657641980" watchObservedRunningTime="2025-01-30 13:54:08.007460112 +0000 UTC m=+27.673588979" Jan 30 13:54:09.603526 kubelet[3401]: E0130 13:54:09.603475 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:11.603627 kubelet[3401]: E0130 13:54:11.603574 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:12.710019 containerd[2095]: time="2025-01-30T13:54:12.709972936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.711377 containerd[2095]: time="2025-01-30T13:54:12.711281755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:54:12.727729 containerd[2095]: time="2025-01-30T13:54:12.727474190Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.738606 containerd[2095]: time="2025-01-30T13:54:12.738559744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 
4.766478514s" Jan 30 13:54:12.738606 containerd[2095]: time="2025-01-30T13:54:12.738600739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:54:12.739138 containerd[2095]: time="2025-01-30T13:54:12.738864479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.742113 containerd[2095]: time="2025-01-30T13:54:12.742072223Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:54:12.782157 containerd[2095]: time="2025-01-30T13:54:12.782104779Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662\"" Jan 30 13:54:12.785525 containerd[2095]: time="2025-01-30T13:54:12.785000415Z" level=info msg="StartContainer for \"5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662\"" Jan 30 13:54:12.865739 systemd[1]: run-containerd-runc-k8s.io-5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662-runc.wrlUj8.mount: Deactivated successfully. Jan 30 13:54:12.903364 containerd[2095]: time="2025-01-30T13:54:12.903319914Z" level=info msg="StartContainer for \"5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662\" returns successfully" Jan 30 13:54:13.603731 kubelet[3401]: E0130 13:54:13.603672 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:14.044022 kubelet[3401]: I0130 13:54:14.043513 3401 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:54:14.048617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662-rootfs.mount: Deactivated successfully. 
Jan 30 13:54:14.054143 containerd[2095]: time="2025-01-30T13:54:14.054071013Z" level=info msg="shim disconnected" id=5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662 namespace=k8s.io Jan 30 13:54:14.055843 containerd[2095]: time="2025-01-30T13:54:14.054147474Z" level=warning msg="cleaning up after shim disconnected" id=5bde277a2cc63038a7938d05410372199411fc405926c2d94a66213671b46662 namespace=k8s.io Jan 30 13:54:14.055843 containerd[2095]: time="2025-01-30T13:54:14.054161741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:14.118075 kubelet[3401]: I0130 13:54:14.117630 3401 topology_manager.go:215] "Topology Admit Handler" podUID="d1c07e75-2ef2-4a46-836f-bfeb350b2011" podNamespace="calico-system" podName="calico-kube-controllers-6fd4bfb6f4-cv7hp" Jan 30 13:54:14.135899 kubelet[3401]: I0130 13:54:14.133499 3401 topology_manager.go:215] "Topology Admit Handler" podUID="97758f69-3608-4bd5-a29f-602e25cb96c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-stfj9" Jan 30 13:54:14.135899 kubelet[3401]: I0130 13:54:14.133699 3401 topology_manager.go:215] "Topology Admit Handler" podUID="955728c7-dcd5-4a2a-928a-608a27b0ea08" podNamespace="calico-apiserver" podName="calico-apiserver-7b99fdb47b-ktz62" Jan 30 13:54:14.161359 kubelet[3401]: I0130 13:54:14.161309 3401 topology_manager.go:215] "Topology Admit Handler" podUID="8add0565-323b-46d5-8793-d7bb0f574609" podNamespace="kube-system" podName="coredns-7db6d8ff4d-84qz8" Jan 30 13:54:14.161552 kubelet[3401]: I0130 13:54:14.161531 3401 topology_manager.go:215] "Topology Admit Handler" podUID="75d06128-1791-4f18-82b2-8e83d7439284" podNamespace="calico-apiserver" podName="calico-apiserver-7b99fdb47b-fvbzx" Jan 30 13:54:14.181920 kubelet[3401]: I0130 13:54:14.176136 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8add0565-323b-46d5-8793-d7bb0f574609-config-volume\") pod \"coredns-7db6d8ff4d-84qz8\" (UID: \"8add0565-323b-46d5-8793-d7bb0f574609\") " pod="kube-system/coredns-7db6d8ff4d-84qz8" Jan 30 13:54:14.181920 kubelet[3401]: I0130 13:54:14.176181 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97758f69-3608-4bd5-a29f-602e25cb96c7-config-volume\") pod \"coredns-7db6d8ff4d-stfj9\" (UID: \"97758f69-3608-4bd5-a29f-602e25cb96c7\") " pod="kube-system/coredns-7db6d8ff4d-stfj9" Jan 30 13:54:14.181920 kubelet[3401]: I0130 13:54:14.176213 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxhg\" (UniqueName: \"kubernetes.io/projected/d1c07e75-2ef2-4a46-836f-bfeb350b2011-kube-api-access-glxhg\") pod \"calico-kube-controllers-6fd4bfb6f4-cv7hp\" (UID: \"d1c07e75-2ef2-4a46-836f-bfeb350b2011\") " pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" Jan 30 13:54:14.181920 kubelet[3401]: I0130 13:54:14.176243 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c07e75-2ef2-4a46-836f-bfeb350b2011-tigera-ca-bundle\") pod \"calico-kube-controllers-6fd4bfb6f4-cv7hp\" (UID: \"d1c07e75-2ef2-4a46-836f-bfeb350b2011\") " pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" Jan 30 13:54:14.181920 kubelet[3401]: I0130 13:54:14.176272 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75d06128-1791-4f18-82b2-8e83d7439284-calico-apiserver-certs\") pod \"calico-apiserver-7b99fdb47b-fvbzx\" (UID: \"75d06128-1791-4f18-82b2-8e83d7439284\") " pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" Jan 30 13:54:14.186949 kubelet[3401]: I0130 13:54:14.176303 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/955728c7-dcd5-4a2a-928a-608a27b0ea08-calico-apiserver-certs\") pod \"calico-apiserver-7b99fdb47b-ktz62\" (UID: \"955728c7-dcd5-4a2a-928a-608a27b0ea08\") " pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" Jan 30 13:54:14.186949 kubelet[3401]: I0130 13:54:14.176330 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxg7\" (UniqueName: \"kubernetes.io/projected/955728c7-dcd5-4a2a-928a-608a27b0ea08-kube-api-access-qxxg7\") pod \"calico-apiserver-7b99fdb47b-ktz62\" (UID: \"955728c7-dcd5-4a2a-928a-608a27b0ea08\") " pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" Jan 30 13:54:14.186949 kubelet[3401]: I0130 13:54:14.176378 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvf6\" (UniqueName: \"kubernetes.io/projected/8add0565-323b-46d5-8793-d7bb0f574609-kube-api-access-gmvf6\") pod \"coredns-7db6d8ff4d-84qz8\" (UID: \"8add0565-323b-46d5-8793-d7bb0f574609\") " pod="kube-system/coredns-7db6d8ff4d-84qz8" Jan 30 13:54:14.186949 kubelet[3401]: I0130 13:54:14.176436 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgf6\" (UniqueName: \"kubernetes.io/projected/97758f69-3608-4bd5-a29f-602e25cb96c7-kube-api-access-cqgf6\") pod \"coredns-7db6d8ff4d-stfj9\" (UID: \"97758f69-3608-4bd5-a29f-602e25cb96c7\") " pod="kube-system/coredns-7db6d8ff4d-stfj9" Jan 30 13:54:14.186949 kubelet[3401]: I0130 13:54:14.176460 3401 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrjj\" (UniqueName: \"kubernetes.io/projected/75d06128-1791-4f18-82b2-8e83d7439284-kube-api-access-gkrjj\") pod \"calico-apiserver-7b99fdb47b-fvbzx\" (UID: \"75d06128-1791-4f18-82b2-8e83d7439284\") " pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" Jan 30 13:54:14.432931 containerd[2095]: time="2025-01-30T13:54:14.432132245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd4bfb6f4-cv7hp,Uid:d1c07e75-2ef2-4a46-836f-bfeb350b2011,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:14.458138 containerd[2095]: time="2025-01-30T13:54:14.456497988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-ktz62,Uid:955728c7-dcd5-4a2a-928a-608a27b0ea08,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:54:14.494412 containerd[2095]: time="2025-01-30T13:54:14.493496197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-fvbzx,Uid:75d06128-1791-4f18-82b2-8e83d7439284,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:54:14.503166 containerd[2095]: time="2025-01-30T13:54:14.502688018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84qz8,Uid:8add0565-323b-46d5-8793-d7bb0f574609,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:14.510939 containerd[2095]: time="2025-01-30T13:54:14.510869775Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-stfj9,Uid:97758f69-3608-4bd5-a29f-602e25cb96c7,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:14.923944 containerd[2095]: time="2025-01-30T13:54:14.923783726Z" level=error msg="Failed to destroy network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.933282 containerd[2095]: time="2025-01-30T13:54:14.933221464Z" level=error msg="encountered an error cleaning up failed sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.940308 containerd[2095]: time="2025-01-30T13:54:14.939994229Z" level=error msg="Failed to destroy network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.942213 containerd[2095]: time="2025-01-30T13:54:14.940463536Z" level=error msg="encountered an error cleaning up failed sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.945621 containerd[2095]: time="2025-01-30T13:54:14.945507472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-ktz62,Uid:955728c7-dcd5-4a2a-928a-608a27b0ea08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.946596 containerd[2095]: time="2025-01-30T13:54:14.945886657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-fvbzx,Uid:75d06128-1791-4f18-82b2-8e83d7439284,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.957142 containerd[2095]: time="2025-01-30T13:54:14.957027804Z" level=error msg="Failed to destroy network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.957746 containerd[2095]: time="2025-01-30T13:54:14.957713830Z" level=error msg="encountered an error cleaning up failed sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.957953 containerd[2095]: time="2025-01-30T13:54:14.957872564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84qz8,Uid:8add0565-323b-46d5-8793-d7bb0f574609,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.965482 containerd[2095]: time="2025-01-30T13:54:14.965398169Z" level=error msg="Failed to destroy network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.965816 containerd[2095]: time="2025-01-30T13:54:14.965776175Z" level=error msg="encountered an error cleaning up failed sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.965951 containerd[2095]: time="2025-01-30T13:54:14.965840372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd4bfb6f4-cv7hp,Uid:d1c07e75-2ef2-4a46-836f-bfeb350b2011,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.967919 kubelet[3401]: E0130 13:54:14.956396 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.967919 kubelet[3401]: E0130 13:54:14.967832 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" Jan 30 13:54:14.968750 kubelet[3401]: E0130 13:54:14.968331 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" Jan 30 13:54:14.968750 kubelet[3401]: E0130 13:54:14.968439 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b99fdb47b-fvbzx_calico-apiserver(75d06128-1791-4f18-82b2-8e83d7439284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b99fdb47b-fvbzx_calico-apiserver(75d06128-1791-4f18-82b2-8e83d7439284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" podUID="75d06128-1791-4f18-82b2-8e83d7439284" Jan 30 13:54:14.968750 kubelet[3401]: E0130 13:54:14.956462 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.969060 kubelet[3401]: E0130 13:54:14.968511 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" Jan 30 13:54:14.969060 kubelet[3401]: E0130 13:54:14.968533 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" Jan 30 13:54:14.969060 kubelet[3401]: E0130 13:54:14.968569 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b99fdb47b-ktz62_calico-apiserver(955728c7-dcd5-4a2a-928a-608a27b0ea08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b99fdb47b-ktz62_calico-apiserver(955728c7-dcd5-4a2a-928a-608a27b0ea08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" podUID="955728c7-dcd5-4a2a-928a-608a27b0ea08" Jan 30 13:54:14.969278 kubelet[3401]: E0130 13:54:14.967925 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.969278 kubelet[3401]: E0130 13:54:14.968611 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84qz8" Jan 30 13:54:14.969278 kubelet[3401]: E0130 13:54:14.969066 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84qz8" Jan 30 13:54:14.969398 kubelet[3401]: E0130 13:54:14.969112 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-84qz8_kube-system(8add0565-323b-46d5-8793-d7bb0f574609)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-84qz8_kube-system(8add0565-323b-46d5-8793-d7bb0f574609)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84qz8" podUID="8add0565-323b-46d5-8793-d7bb0f574609" Jan 30 13:54:14.969398 kubelet[3401]: E0130 13:54:14.967950 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.969398 kubelet[3401]: E0130 13:54:14.969160 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" Jan 30 13:54:14.969548 kubelet[3401]: E0130 13:54:14.969180 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" Jan 30 13:54:14.969548 kubelet[3401]: E0130 13:54:14.969214 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-6fd4bfb6f4-cv7hp_calico-system(d1c07e75-2ef2-4a46-836f-bfeb350b2011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fd4bfb6f4-cv7hp_calico-system(d1c07e75-2ef2-4a46-836f-bfeb350b2011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" podUID="d1c07e75-2ef2-4a46-836f-bfeb350b2011" Jan 30 13:54:14.972724 containerd[2095]: time="2025-01-30T13:54:14.972677973Z" level=error msg="Failed to destroy network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.973075 containerd[2095]: time="2025-01-30T13:54:14.973034628Z" level=error msg="encountered an error cleaning up failed sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.973178 containerd[2095]: time="2025-01-30T13:54:14.973090477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stfj9,Uid:97758f69-3608-4bd5-a29f-602e25cb96c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.973363 kubelet[3401]: E0130 13:54:14.973323 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:14.973427 kubelet[3401]: E0130 13:54:14.973384 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-stfj9" Jan 30 13:54:14.973427 kubelet[3401]: E0130 13:54:14.973410 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-stfj9" Jan 30 13:54:14.973572 kubelet[3401]: E0130 
13:54:14.973492 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-stfj9_kube-system(97758f69-3608-4bd5-a29f-602e25cb96c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-stfj9_kube-system(97758f69-3608-4bd5-a29f-602e25cb96c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-stfj9" podUID="97758f69-3608-4bd5-a29f-602e25cb96c7" Jan 30 13:54:15.002840 kubelet[3401]: I0130 13:54:15.002807 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:15.026061 containerd[2095]: time="2025-01-30T13:54:15.025783798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:54:15.047906 kubelet[3401]: I0130 13:54:15.047239 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:15.071900 kubelet[3401]: I0130 13:54:15.071671 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:15.074806 kubelet[3401]: I0130 13:54:15.074772 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:15.077666 kubelet[3401]: I0130 13:54:15.077065 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:15.098546 containerd[2095]: time="2025-01-30T13:54:15.098491676Z" level=info msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" Jan 30 13:54:15.101344 containerd[2095]: time="2025-01-30T13:54:15.100116006Z" level=info msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" Jan 30 13:54:15.101344 containerd[2095]: time="2025-01-30T13:54:15.101331626Z" level=info msg="Ensure that sandbox ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0 in task-service has been cleanup successfully" Jan 30 13:54:15.101558 containerd[2095]: time="2025-01-30T13:54:15.101536626Z" level=info msg="Ensure that sandbox 55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e in task-service has been cleanup successfully" Jan 30 13:54:15.104371 containerd[2095]: time="2025-01-30T13:54:15.104334714Z" level=info msg="StopPodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" Jan 30 13:54:15.104786 containerd[2095]: time="2025-01-30T13:54:15.104348683Z" level=info msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" Jan 30 13:54:15.105237 containerd[2095]: time="2025-01-30T13:54:15.105211526Z" level=info msg="Ensure that sandbox 744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6 in task-service has been cleanup successfully" Jan 30 13:54:15.106225 containerd[2095]: time="2025-01-30T13:54:15.106200539Z" level=info msg="Ensure that sandbox 
a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3 in task-service has been cleanup successfully" Jan 30 13:54:15.106401 containerd[2095]: time="2025-01-30T13:54:15.104385081Z" level=info msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" Jan 30 13:54:15.110819 containerd[2095]: time="2025-01-30T13:54:15.110768952Z" level=info msg="Ensure that sandbox 630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e in task-service has been cleanup successfully" Jan 30 13:54:15.211831 containerd[2095]: time="2025-01-30T13:54:15.211575213Z" level=error msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" failed" error="failed to destroy network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.212622 kubelet[3401]: E0130 13:54:15.212344 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:15.212622 kubelet[3401]: E0130 13:54:15.212411 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e"} Jan 30 13:54:15.212622 kubelet[3401]: E0130 13:54:15.212492 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1c07e75-2ef2-4a46-836f-bfeb350b2011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:15.212622 kubelet[3401]: E0130 13:54:15.212523 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1c07e75-2ef2-4a46-836f-bfeb350b2011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" podUID="d1c07e75-2ef2-4a46-836f-bfeb350b2011" Jan 30 13:54:15.242077 containerd[2095]: time="2025-01-30T13:54:15.242014442Z" level=error msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" failed" error="failed to destroy network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.242755 containerd[2095]: 
time="2025-01-30T13:54:15.242216012Z" level=error msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" failed" error="failed to destroy network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.242841 kubelet[3401]: E0130 13:54:15.242302 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:15.242841 kubelet[3401]: E0130 13:54:15.242368 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6"} Jan 30 13:54:15.242841 kubelet[3401]: E0130 13:54:15.242549 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:15.242841 kubelet[3401]: E0130 13:54:15.242579 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0"} Jan 30 13:54:15.242841 kubelet[3401]: E0130 13:54:15.242615 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8add0565-323b-46d5-8793-d7bb0f574609\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:15.244441 kubelet[3401]: E0130 13:54:15.242913 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8add0565-323b-46d5-8793-d7bb0f574609\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84qz8" podUID="8add0565-323b-46d5-8793-d7bb0f574609" Jan 30 13:54:15.244441 kubelet[3401]: E0130 13:54:15.242412 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75d06128-1791-4f18-82b2-8e83d7439284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:15.244441 kubelet[3401]: E0130 13:54:15.243958 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75d06128-1791-4f18-82b2-8e83d7439284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" podUID="75d06128-1791-4f18-82b2-8e83d7439284" Jan 30 13:54:15.245351 containerd[2095]: time="2025-01-30T13:54:15.245296742Z" level=error msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" failed" error="failed to destroy network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.246224 kubelet[3401]: E0130 13:54:15.245949 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:15.246224 kubelet[3401]: E0130 13:54:15.246043 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e"} Jan 30 13:54:15.246224 kubelet[3401]: E0130 13:54:15.246102 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"955728c7-dcd5-4a2a-928a-608a27b0ea08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:15.246224 kubelet[3401]: E0130 13:54:15.246170 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"955728c7-dcd5-4a2a-928a-608a27b0ea08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" podUID="955728c7-dcd5-4a2a-928a-608a27b0ea08" Jan 30 13:54:15.252538 containerd[2095]: time="2025-01-30T13:54:15.252491032Z" level=error msg="StopPodSandbox for 
\"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" failed" error="failed to destroy network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.252973 kubelet[3401]: E0130 13:54:15.252754 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:15.252973 kubelet[3401]: E0130 13:54:15.252805 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3"} Jan 30 13:54:15.252973 kubelet[3401]: E0130 13:54:15.252839 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97758f69-3608-4bd5-a29f-602e25cb96c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:15.252973 kubelet[3401]: E0130 13:54:15.252861 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97758f69-3608-4bd5-a29f-602e25cb96c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-stfj9" podUID="97758f69-3608-4bd5-a29f-602e25cb96c7" Jan 30 13:54:15.607680 containerd[2095]: time="2025-01-30T13:54:15.606731458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjwhb,Uid:89933e39-f4f4-49ac-8467-88e1539cd0a5,Namespace:calico-system,Attempt:0,}" Jan 30 13:54:15.728852 containerd[2095]: time="2025-01-30T13:54:15.728799626Z" level=error msg="Failed to destroy network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.732517 containerd[2095]: time="2025-01-30T13:54:15.732458872Z" level=error msg="encountered an error cleaning up failed sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.732669 containerd[2095]: time="2025-01-30T13:54:15.732554845Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qjwhb,Uid:89933e39-f4f4-49ac-8467-88e1539cd0a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.732907 kubelet[3401]: E0130 13:54:15.732808 3401 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:15.732988 kubelet[3401]: E0130 13:54:15.732944 3401 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:15.733291 kubelet[3401]: E0130 13:54:15.733259 3401 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qjwhb" Jan 30 13:54:15.733396 kubelet[3401]: E0130 13:54:15.733357 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qjwhb_calico-system(89933e39-f4f4-49ac-8467-88e1539cd0a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qjwhb_calico-system(89933e39-f4f4-49ac-8467-88e1539cd0a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:15.736806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff-shm.mount: Deactivated successfully. 
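The whole stretch above is one failure mode repeated: before it will add or delete a sandbox network, the Calico CNI plugin stats /var/lib/calico/nodename, and that file does not exist until the calico/node container has started and written it. A minimal Go sketch of that preflight check, reconstructing only the behaviour visible in the errors (the path and hint text are copied from the log; readNodename and nodenamePath are illustrative names, not Calico's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenamePath: the file calico/node writes once it is up; the path is
// taken from the errors above, the constant name is hypothetical.
const nodenamePath = "/var/lib/calico/nodename"

// readNodename stats the file first, so a missing file surfaces exactly as
// "stat /var/lib/calico/nodename: no such file or directory", then appends
// the same operator hint seen in the journal.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenamePath); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenamePath)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := readNodename(); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("nodename:", name)
	}
}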
Jan 30 13:54:16.082904 kubelet[3401]: I0130 13:54:16.082845 3401 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:16.084108 containerd[2095]: time="2025-01-30T13:54:16.084057756Z" level=info msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" Jan 30 13:54:16.086213 containerd[2095]: time="2025-01-30T13:54:16.084292714Z" level=info msg="Ensure that sandbox b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff in task-service has been cleanup successfully" Jan 30 13:54:16.149789 containerd[2095]: time="2025-01-30T13:54:16.149641843Z" level=error msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" failed" error="failed to destroy network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:54:16.151086 kubelet[3401]: E0130 13:54:16.150325 3401 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:16.151086 kubelet[3401]: E0130 13:54:16.150384 3401 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff"} Jan 30 13:54:16.151086 kubelet[3401]: E0130 13:54:16.150431 3401 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89933e39-f4f4-49ac-8467-88e1539cd0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:54:16.151086 kubelet[3401]: E0130 13:54:16.150486 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89933e39-f4f4-49ac-8467-88e1539cd0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qjwhb" podUID="89933e39-f4f4-49ac-8467-88e1539cd0a5" Jan 30 13:54:21.957002 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:54:21.954962 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:54:21.955087 systemd-resolved[1973]: Flushed all caches. Jan 30 13:54:22.506807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161152829.mount: Deactivated successfully. 
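Note the kubelet side of the same episode: each failed KillPodSandbox ends in "Error syncing pod, skipping" and the pod is requeued, so the teardown keeps being retried until it succeeds (it finally does at 13:54:27, once calico/node is running). A simplified model of that retry pattern, assuming an invented killPodSandbox stub and a generic exponential backoff rather than kubelet's actual pod-worker code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// killPodSandbox stands in for the CRI call; it keeps failing until
// calico/node has written its nodename file (simulated by attempt count).
func killPodSandbox(attempt int) error {
	if attempt < 4 {
		return errors.New(`plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory`)
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := killPodSandbox(attempt)
		if err == nil {
			fmt.Printf("attempt %d: sandbox torn down\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; requeueing in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 8*time.Second {
			backoff *= 2 // real implementations cap and jitter the backoff
		}
	}
}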
Jan 30 13:54:22.596094 containerd[2095]: time="2025-01-30T13:54:22.594428388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:54:22.599192 containerd[2095]: time="2025-01-30T13:54:22.599139757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.619730 containerd[2095]: time="2025-01-30T13:54:22.619043904Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.620275 containerd[2095]: time="2025-01-30T13:54:22.620167079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:22.621393 containerd[2095]: time="2025-01-30T13:54:22.620932276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.595077377s" Jan 30 13:54:22.621393 containerd[2095]: time="2025-01-30T13:54:22.620977308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:54:22.684528 containerd[2095]: time="2025-01-30T13:54:22.684420815Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:54:22.761261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580349682.mount: Deactivated successfully. Jan 30 13:54:22.777613 containerd[2095]: time="2025-01-30T13:54:22.777563331Z" level=info msg="CreateContainer within sandbox \"2c23834265bcdbfbf84e72e76d22d3e6581d61d1710dfba57bb955dd74591536\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d45caf1e604b35830cc73bfbf781c911b46001f8f31b8112907324592ae2b607\"" Jan 30 13:54:22.778969 containerd[2095]: time="2025-01-30T13:54:22.778615851Z" level=info msg="StartContainer for \"d45caf1e604b35830cc73bfbf781c911b46001f8f31b8112907324592ae2b607\"" Jan 30 13:54:22.918292 containerd[2095]: time="2025-01-30T13:54:22.918249114Z" level=info msg="StartContainer for \"d45caf1e604b35830cc73bfbf781c911b46001f8f31b8112907324592ae2b607\" returns successfully" Jan 30 13:54:23.271080 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:54:23.271250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 13:54:24.003174 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:54:24.007656 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:54:24.003189 systemd-resolved[1973]: Flushed all caches.
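The pull stats above are self-consistent: 142,741,872 bytes in 7.595077377s works out to roughly 18.8 MB/s (about 17.9 MiB/s). A one-liner to verify the arithmetic, using the size and duration from the "Pulled image" line:

package main

import "fmt"

func main() {
	const bytes = 142741872.0   // "size" from the Pulled image line
	const seconds = 7.595077377 // duration from the same line
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytes/seconds/1e6, bytes/seconds/(1<<20))
}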
Jan 30 13:54:26.607511 containerd[2095]: time="2025-01-30T13:54:26.606348639Z" level=info msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" Jan 30 13:54:26.609171 containerd[2095]: time="2025-01-30T13:54:26.608395453Z" level=info msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" Jan 30 13:54:26.611647 containerd[2095]: time="2025-01-30T13:54:26.611613873Z" level=info msg="StopPodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" Jan 30 13:54:26.874401 kubelet[3401]: I0130 13:54:26.860923 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6t4fh" podStartSLOduration=6.246309126 podStartE2EDuration="25.827811404s" podCreationTimestamp="2025-01-30 13:54:01 +0000 UTC" firstStartedPulling="2025-01-30 13:54:03.04111786 +0000 UTC m=+22.707246715" lastFinishedPulling="2025-01-30 13:54:22.622620138 +0000 UTC m=+42.288748993" observedRunningTime="2025-01-30 13:54:23.255928846 +0000 UTC m=+42.922057710" watchObservedRunningTime="2025-01-30 13:54:26.827811404 +0000 UTC m=+46.493940265" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.835 [INFO][4923] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.842 [INFO][4923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" iface="eth0" netns="/var/run/netns/cni-f9d5e9f4-3d09-5575-21b2-10c2fdc0c34e" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.842 [INFO][4923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" iface="eth0" netns="/var/run/netns/cni-f9d5e9f4-3d09-5575-21b2-10c2fdc0c34e" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.843 [INFO][4923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" iface="eth0" netns="/var/run/netns/cni-f9d5e9f4-3d09-5575-21b2-10c2fdc0c34e" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.843 [INFO][4923] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:26.843 [INFO][4923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.139 [INFO][4936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.141 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.142 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.158 [WARNING][4936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.158 [INFO][4936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.160 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:27.169804 containerd[2095]: 2025-01-30 13:54:27.164 [INFO][4923] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:27.179897 systemd[1]: run-netns-cni\x2df9d5e9f4\x2d3d09\x2d5575\x2d21b2\x2d10c2fdc0c34e.mount: Deactivated successfully. Jan 30 13:54:27.194562 containerd[2095]: time="2025-01-30T13:54:27.192347162Z" level=info msg="TearDown network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" successfully" Jan 30 13:54:27.194562 containerd[2095]: time="2025-01-30T13:54:27.192397123Z" level=info msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" returns successfully" Jan 30 13:54:27.200849 containerd[2095]: time="2025-01-30T13:54:27.200276852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84qz8,Uid:8add0565-323b-46d5-8793-d7bb0f574609,Namespace:kube-system,Attempt:1,}" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.828 [INFO][4910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.829 [INFO][4910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" iface="eth0" netns="/var/run/netns/cni-4a628538-8eb5-148c-a827-20f51ee2e01b" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.830 [INFO][4910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" iface="eth0" netns="/var/run/netns/cni-4a628538-8eb5-148c-a827-20f51ee2e01b" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.836 [INFO][4910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" iface="eth0" netns="/var/run/netns/cni-4a628538-8eb5-148c-a827-20f51ee2e01b" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.837 [INFO][4910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:26.837 [INFO][4910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.139 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.141 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.160 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.181 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.181 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.185 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:27.204037 containerd[2095]: 2025-01-30 13:54:27.199 [INFO][4910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:27.205084 containerd[2095]: time="2025-01-30T13:54:27.204934022Z" level=info msg="TearDown network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" successfully" Jan 30 13:54:27.205084 containerd[2095]: time="2025-01-30T13:54:27.204970698Z" level=info msg="StopPodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" returns successfully" Jan 30 13:54:27.207595 systemd[1]: run-netns-cni\x2d4a628538\x2d8eb5\x2d148c\x2da827\x2d20f51ee2e01b.mount: Deactivated successfully. Jan 30 13:54:27.213088 containerd[2095]: time="2025-01-30T13:54:27.212705633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stfj9,Uid:97758f69-3608-4bd5-a29f-602e25cb96c7,Namespace:kube-system,Attempt:1,}" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.825 [INFO][4915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.829 [INFO][4915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" iface="eth0" netns="/var/run/netns/cni-bc8209b8-3834-b566-b22e-1ca55d8272fa" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.829 [INFO][4915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" iface="eth0" netns="/var/run/netns/cni-bc8209b8-3834-b566-b22e-1ca55d8272fa" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.836 [INFO][4915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" iface="eth0" netns="/var/run/netns/cni-bc8209b8-3834-b566-b22e-1ca55d8272fa" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.836 [INFO][4915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:26.836 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.139 [INFO][4935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.141 [INFO][4935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.185 [INFO][4935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.200 [WARNING][4935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.201 [INFO][4935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.209 [INFO][4935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:27.215923 containerd[2095]: 2025-01-30 13:54:27.213 [INFO][4915] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:27.222001 containerd[2095]: time="2025-01-30T13:54:27.221057167Z" level=info msg="TearDown network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" successfully" Jan 30 13:54:27.222001 containerd[2095]: time="2025-01-30T13:54:27.221098929Z" level=info msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" returns successfully" Jan 30 13:54:27.225127 systemd[1]: run-netns-cni\x2dbc8209b8\x2d3834\x2db566\x2db22e\x2d1ca55d8272fa.mount: Deactivated successfully. Jan 30 13:54:27.231758 kubelet[3401]: I0130 13:54:27.231621 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:27.246225 containerd[2095]: time="2025-01-30T13:54:27.246146844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd4bfb6f4-cv7hp,Uid:d1c07e75-2ef2-4a46-836f-bfeb350b2011,Namespace:calico-system,Attempt:1,}" Jan 30 13:54:27.946887 systemd-networkd[1651]: cali918ff56ceea: Link UP Jan 30 13:54:27.956035 systemd-networkd[1651]: cali918ff56ceea: Gained carrier Jan 30 13:54:27.967489 (udev-worker)[5060]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.560 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.586 [INFO][4978] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0 coredns-7db6d8ff4d- kube-system 8add0565-323b-46d5-8793-d7bb0f574609 742 0 2025-01-30 13:53:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-102 coredns-7db6d8ff4d-84qz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali918ff56ceea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.587 [INFO][4978] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.725 [INFO][5036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" HandleID="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.751 [INFO][5036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" HandleID="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a690), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-102", "pod":"coredns-7db6d8ff4d-84qz8", "timestamp":"2025-01-30 13:54:27.725942943 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.752 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.752 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.752 [INFO][5036] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.755 [INFO][5036] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.774 [INFO][5036] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.796 [INFO][5036] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.805 [INFO][5036] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.813 [INFO][5036] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.813 [INFO][5036] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.826 [INFO][5036] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.843 [INFO][5036] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.865 [INFO][5036] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.65/26] block=192.168.54.64/26 handle="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.866 [INFO][5036] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.65/26] handle="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" host="ip-172-31-23-102" Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.867 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:54:28.034858 containerd[2095]: 2025-01-30 13:54:27.868 [INFO][5036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.65/26] IPv6=[] ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" HandleID="k8s-pod-network.17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.888 [INFO][4978] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8add0565-323b-46d5-8793-d7bb0f574609", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"coredns-7db6d8ff4d-84qz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali918ff56ceea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.888 [INFO][4978] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.65/32] ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.888 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali918ff56ceea ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.969 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8"
WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.975 [INFO][4978] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8add0565-323b-46d5-8793-d7bb0f574609", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e", Pod:"coredns-7db6d8ff4d-84qz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali918ff56ceea", MAC:"02:ef:65:38:f9:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.999 [INFO][4978] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84qz8" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:28.052847 containerd[2095]: 2025-01-30 13:54:27.445 [INFO][4955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:28.045503 systemd-networkd[1651]: cali2974176b028: Link UP Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.536 [INFO][4955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0 coredns-7db6d8ff4d- kube-system 97758f69-3608-4bd5-a29f-602e25cb96c7 740 0 2025-01-30 13:53:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-102 coredns-7db6d8ff4d-stfj9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2974176b028 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }]
[]}} ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.538 [INFO][4955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.761 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" HandleID="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.792 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" HandleID="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e4e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-102", "pod":"coredns-7db6d8ff4d-stfj9", "timestamp":"2025-01-30 13:54:27.761269185 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.795 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.867 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.869 [INFO][5032] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.874 [INFO][5032] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.891 [INFO][5032] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.914 [INFO][5032] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.920 [INFO][5032] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.929 [INFO][5032] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.929 [INFO][5032] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.940 [INFO][5032] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360 Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.962 [INFO][5032] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.984 [INFO][5032] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.66/26] block=192.168.54.64/26 handle="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.984 [INFO][5032] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.66/26] handle="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" host="ip-172-31-23-102" Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.984 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
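A reading note for the endpoint dumps in this stretch: ports are printed as Go hex literals, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (the CoreDNS metrics port). Quick check:

package main

import "fmt"

func main() {
	fmt.Printf("0x35   = %d\n", 0x35)   // 53   (dns, dns-tcp)
	fmt.Printf("0x23c1 = %d\n", 0x23c1) // 9153 (coredns metrics)
}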
Jan 30 13:54:28.053562 containerd[2095]: 2025-01-30 13:54:27.984 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.66/26] IPv6=[] ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" HandleID="k8s-pod-network.65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.047056 systemd-networkd[1651]: cali2974176b028: Gained carrier Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.015 [INFO][4955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"97758f69-3608-4bd5-a29f-602e25cb96c7", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"coredns-7db6d8ff4d-stfj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2974176b028", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.016 [INFO][4955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.66/32] ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.016 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2974176b028 ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.020 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system"
Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.022 [INFO][4955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"97758f69-3608-4bd5-a29f-602e25cb96c7", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360", Pod:"coredns-7db6d8ff4d-stfj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2974176b028", MAC:"6e:eb:80:8a:34:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.059343 containerd[2095]: 2025-01-30 13:54:28.038 [INFO][4955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360" Namespace="kube-system" Pod="coredns-7db6d8ff4d-stfj9" WorkloadEndpoint="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:28.221917 (udev-worker)[5059]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:28.230611 systemd-networkd[1651]: calic6dda57501e: Link UP Jan 30 13:54:28.230840 systemd-networkd[1651]: calic6dda57501e: Gained carrier Jan 30 13:54:28.276489 containerd[2095]: time="2025-01-30T13:54:28.275730499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:28.276489 containerd[2095]: time="2025-01-30T13:54:28.275809213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:28.276489 containerd[2095]: time="2025-01-30T13:54:28.276005488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.276489 containerd[2095]: time="2025-01-30T13:54:28.276158201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.594 [INFO][4981] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.659 [INFO][4981] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0 calico-kube-controllers-6fd4bfb6f4- calico-system d1c07e75-2ef2-4a46-836f-bfeb350b2011 741 0 2025-01-30 13:54:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fd4bfb6f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-102 calico-kube-controllers-6fd4bfb6f4-cv7hp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic6dda57501e [] []}} ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.659 [INFO][4981] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.814 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" HandleID="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.845 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" HandleID="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047b1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-102", "pod":"calico-kube-controllers-6fd4bfb6f4-cv7hp", "timestamp":"2025-01-30 13:54:27.814382527 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.846 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.988 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:27.988 [INFO][5043] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.024 [INFO][5043] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.062 [INFO][5043] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.085 [INFO][5043] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.093 [INFO][5043] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.110 [INFO][5043] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.110 [INFO][5043] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.116 [INFO][5043] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.130 [INFO][5043] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.163 [INFO][5043] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.67/26] block=192.168.54.64/26 handle="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.169 [INFO][5043] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.67/26] handle="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" host="ip-172-31-23-102" Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.169 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
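This is the third CNI ADD in a row ([5036], [5032], [5043]) to run the same acquire/assign/release sequence against the host-wide IPAM lock, which is why the claimed addresses come out strictly sequential: .65, .66, .67. A toy model of that serialization, using an in-memory counter where Calico's real allocator persists its claims in the datastore; which pod wins the lock first, and therefore which address it gets, depends on lock-acquisition order, exactly as in the trace:

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // the "host-wide IPAM lock" from the trace
	next int        // next free host offset within 192.168.54.64/26
}

// assign serializes on the lock, so concurrent callers always receive
// distinct, consecutive addresses from the block.
func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer h.mu.Unlock()
	h.next++
	return fmt.Sprintf("%s -> 192.168.54.%d/26", pod, 64+h.next)
}

func main() {
	ipam := &hostIPAM{}
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-84qz8", "coredns-stfj9", "calico-kube-controllers-cv7hp"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(ipam.assign(p)) }(pod)
	}
	wg.Wait()
}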
Jan 30 13:54:28.290673 containerd[2095]: 2025-01-30 13:54:28.169 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.67/26] IPv6=[] ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" HandleID="k8s-pod-network.02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.193 [INFO][4981] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0", GenerateName:"calico-kube-controllers-6fd4bfb6f4-", Namespace:"calico-system", SelfLink:"", UID:"d1c07e75-2ef2-4a46-836f-bfeb350b2011", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd4bfb6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"calico-kube-controllers-6fd4bfb6f4-cv7hp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6dda57501e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.193 [INFO][4981] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.67/32] ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.215 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6dda57501e ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.221 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.221 [INFO][4981]
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0", GenerateName:"calico-kube-controllers-6fd4bfb6f4-", Namespace:"calico-system", SelfLink:"", UID:"d1c07e75-2ef2-4a46-836f-bfeb350b2011", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd4bfb6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e", Pod:"calico-kube-controllers-6fd4bfb6f4-cv7hp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6dda57501e", MAC:"d2:de:62:72:b9:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:28.293090 containerd[2095]: 2025-01-30 13:54:28.270 [INFO][4981] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e" Namespace="calico-system" Pod="calico-kube-controllers-6fd4bfb6f4-cv7hp" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:28.333486 containerd[2095]: time="2025-01-30T13:54:28.331457602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:28.334428 containerd[2095]: time="2025-01-30T13:54:28.334204469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:28.335164 containerd[2095]: time="2025-01-30T13:54:28.334635148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.337186 containerd[2095]: time="2025-01-30T13:54:28.336918477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.587459 containerd[2095]: time="2025-01-30T13:54:28.587295825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:28.587652 containerd[2095]: time="2025-01-30T13:54:28.587437818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:28.587652 containerd[2095]: time="2025-01-30T13:54:28.587460046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.587652 containerd[2095]: time="2025-01-30T13:54:28.587583066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:28.606162 containerd[2095]: time="2025-01-30T13:54:28.606119676Z" level=info msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" Jan 30 13:54:28.637053 containerd[2095]: time="2025-01-30T13:54:28.637013597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-stfj9,Uid:97758f69-3608-4bd5-a29f-602e25cb96c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360\"" Jan 30 13:54:28.645932 containerd[2095]: time="2025-01-30T13:54:28.645867478Z" level=info msg="CreateContainer within sandbox \"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:54:28.650649 containerd[2095]: time="2025-01-30T13:54:28.650598006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84qz8,Uid:8add0565-323b-46d5-8793-d7bb0f574609,Namespace:kube-system,Attempt:1,} returns sandbox id \"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e\"" Jan 30 13:54:28.682975 containerd[2095]: time="2025-01-30T13:54:28.682926055Z" level=info msg="CreateContainer within sandbox \"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:54:28.754060 containerd[2095]: time="2025-01-30T13:54:28.753577088Z" level=info msg="CreateContainer within sandbox \"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a533f75186178274f06d327eac17fa8233e48e6161536b9aa44ca483d697319\"" Jan 30 13:54:28.754483 containerd[2095]: time="2025-01-30T13:54:28.754423641Z" level=info msg="StartContainer for \"7a533f75186178274f06d327eac17fa8233e48e6161536b9aa44ca483d697319\"" Jan 30 13:54:28.761340 containerd[2095]: time="2025-01-30T13:54:28.761206515Z" level=info msg="CreateContainer within sandbox \"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76a4d5a55e71b1a7d2cecb0d4e0ed9b29e907926c0dfa09df0bb63887dcb0e73\"" Jan 30 13:54:28.762966 containerd[2095]: time="2025-01-30T13:54:28.762165570Z" level=info msg="StartContainer for \"76a4d5a55e71b1a7d2cecb0d4e0ed9b29e907926c0dfa09df0bb63887dcb0e73\"" Jan 30 13:54:28.894890 containerd[2095]: time="2025-01-30T13:54:28.894639324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fd4bfb6f4-cv7hp,Uid:d1c07e75-2ef2-4a46-836f-bfeb350b2011,Namespace:calico-system,Attempt:1,} returns sandbox id \"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e\"" Jan 30 13:54:28.902076 containerd[2095]: time="2025-01-30T13:54:28.901606741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:54:29.014377 containerd[2095]: time="2025-01-30T13:54:29.011576572Z" level=info msg="StartContainer for \"76a4d5a55e71b1a7d2cecb0d4e0ed9b29e907926c0dfa09df0bb63887dcb0e73\" returns 
successfully" Jan 30 13:54:29.027823 containerd[2095]: time="2025-01-30T13:54:29.026953066Z" level=info msg="StartContainer for \"7a533f75186178274f06d327eac17fa8233e48e6161536b9aa44ca483d697319\" returns successfully" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.841 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.843 [INFO][5251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" iface="eth0" netns="/var/run/netns/cni-3abaf919-0eac-7647-0742-97c4c28a25ce" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.843 [INFO][5251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" iface="eth0" netns="/var/run/netns/cni-3abaf919-0eac-7647-0742-97c4c28a25ce" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.845 [INFO][5251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" iface="eth0" netns="/var/run/netns/cni-3abaf919-0eac-7647-0742-97c4c28a25ce" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.845 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.845 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.986 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.986 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:28.986 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:29.000 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:29.000 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:29.004 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:29.031629 containerd[2095]: 2025-01-30 13:54:29.012 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:29.046536 containerd[2095]: time="2025-01-30T13:54:29.034062248Z" level=info msg="TearDown network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" successfully" Jan 30 13:54:29.046536 containerd[2095]: time="2025-01-30T13:54:29.034105850Z" level=info msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" returns successfully" Jan 30 13:54:29.075540 containerd[2095]: time="2025-01-30T13:54:29.074365476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-fvbzx,Uid:75d06128-1791-4f18-82b2-8e83d7439284,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:54:29.211500 systemd[1]: run-netns-cni\x2d3abaf919\x2d0eac\x2d7647\x2d0742\x2d97c4c28a25ce.mount: Deactivated successfully. Jan 30 13:54:29.251449 systemd-networkd[1651]: cali918ff56ceea: Gained IPv6LL Jan 30 13:54:29.317042 systemd-networkd[1651]: calic6dda57501e: Gained IPv6LL Jan 30 13:54:29.345906 kubelet[3401]: I0130 13:54:29.343901 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-stfj9" podStartSLOduration=36.343844488 podStartE2EDuration="36.343844488s" podCreationTimestamp="2025-01-30 13:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:29.291188095 +0000 UTC m=+48.957316959" watchObservedRunningTime="2025-01-30 13:54:29.343844488 +0000 UTC m=+49.009973354" Jan 30 13:54:29.352242 kubelet[3401]: I0130 13:54:29.350731 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-84qz8" podStartSLOduration=36.350709595 podStartE2EDuration="36.350709595s" podCreationTimestamp="2025-01-30 13:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:29.341551171 +0000 UTC m=+49.007680035" watchObservedRunningTime="2025-01-30 13:54:29.350709595 +0000 UTC m=+49.016838460" Jan 30 13:54:29.379242 systemd-networkd[1651]: cali2974176b028: Gained IPv6LL Jan 30 13:54:29.540792 systemd-networkd[1651]: cali8cb76a8f17b: Link UP Jan 30 13:54:29.543285 systemd-networkd[1651]: cali8cb76a8f17b: Gained carrier Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.288 [INFO][5357] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.328 [INFO][5357] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0 calico-apiserver-7b99fdb47b- calico-apiserver 75d06128-1791-4f18-82b2-8e83d7439284 763 0 2025-01-30 13:54:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b99fdb47b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-102 calico-apiserver-7b99fdb47b-fvbzx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cb76a8f17b [] []}} ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" 
WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.330 [INFO][5357] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.459 [INFO][5370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" HandleID="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.479 [INFO][5370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" HandleID="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-102", "pod":"calico-apiserver-7b99fdb47b-fvbzx", "timestamp":"2025-01-30 13:54:29.459944966 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.479 [INFO][5370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.479 [INFO][5370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.479 [INFO][5370] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.481 [INFO][5370] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.490 [INFO][5370] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.497 [INFO][5370] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.499 [INFO][5370] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.501 [INFO][5370] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.502 [INFO][5370] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.503 [INFO][5370] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864 Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.511 [INFO][5370] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.527 [INFO][5370] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.68/26] block=192.168.54.64/26 handle="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.527 [INFO][5370] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.68/26] handle="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" host="ip-172-31-23-102" Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.528 [INFO][5370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
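Every assignment and release in these entries is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released". The lock serializes the block read and the subsequent "Writing block in order to claim IPs" so that concurrent CNI invocations on one node cannot claim the same address. A minimal sketch of that pattern, with hypothetical types:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes every claim on this node, mirroring the
    // "host-wide IPAM lock" entries: without the lock, two concurrent CNI
    // ADDs could read the same block state and claim the same ordinal.
    type hostIPAM struct {
        mu   sync.Mutex
        next int
    }

    func (h *hostIPAM) assign() int {
        h.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        ord := h.next
        h.next++ // persisted in the real code: "Writing block in order to claim IPs"
        return ord
    }

    func main() {
        ipam := &hostIPAM{next: 3} // .67 is ordinal 3 in 192.168.54.64/26
        var wg sync.WaitGroup
        for _, pod := range []string{"kube-controllers", "apiserver-fvbzx", "apiserver-ktz62"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                fmt.Printf("%s -> ordinal %d\n", name, ipam.assign())
            }(pod)
        }
        wg.Wait() // ordinals 3, 4, 5 handed out exactly once, in some order
    }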
Jan 30 13:54:29.586851 containerd[2095]: 2025-01-30 13:54:29.528 [INFO][5370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.68/26] IPv6=[] ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" HandleID="k8s-pod-network.c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.532 [INFO][5357] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75d06128-1791-4f18-82b2-8e83d7439284", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"calico-apiserver-7b99fdb47b-fvbzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb76a8f17b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.532 [INFO][5357] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.68/32] ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.532 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cb76a8f17b ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.544 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.548 [INFO][5357] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75d06128-1791-4f18-82b2-8e83d7439284", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864", Pod:"calico-apiserver-7b99fdb47b-fvbzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb76a8f17b", MAC:"82:df:c6:e8:87:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:29.588560 containerd[2095]: 2025-01-30 13:54:29.583 [INFO][5357] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-fvbzx" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:29.606760 containerd[2095]: time="2025-01-30T13:54:29.606677623Z" level=info msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" Jan 30 13:54:29.650319 containerd[2095]: time="2025-01-30T13:54:29.649103918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:29.650319 containerd[2095]: time="2025-01-30T13:54:29.649204815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:29.650319 containerd[2095]: time="2025-01-30T13:54:29.649220320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:29.650319 containerd[2095]: time="2025-01-30T13:54:29.649391770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:29.787311 containerd[2095]: time="2025-01-30T13:54:29.787268164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-fvbzx,Uid:75d06128-1791-4f18-82b2-8e83d7439284,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864\"" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.745 [INFO][5417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.745 [INFO][5417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" iface="eth0" netns="/var/run/netns/cni-f805ebd0-ec76-1253-954b-f7f80ae1b3e3" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.746 [INFO][5417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" iface="eth0" netns="/var/run/netns/cni-f805ebd0-ec76-1253-954b-f7f80ae1b3e3" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.747 [INFO][5417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" iface="eth0" netns="/var/run/netns/cni-f805ebd0-ec76-1253-954b-f7f80ae1b3e3" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.748 [INFO][5417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.748 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.798 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.798 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.798 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.805 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.805 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.806 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
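The teardown of sandbox 630b1847d9c1... just above shows the release path being deliberately forgiving: release by handle ID first, and when the datastore has no such allocation ("Asked to release address but it doesn't exist. Ignoring"), fall back to the workload ID instead of failing, because the runtime may retry CNI DEL for a sandbox that was already cleaned up. A map-based sketch of that two-step release, with hypothetical types:

    package main

    import "fmt"

    // store maps an allocation key (IPAM handle ID or workload ID) to an IP.
    type store map[string]string

    // release removes an allocation if present and reports whether it existed.
    // A missing entry is not an error: CNI DEL must be idempotent.
    func (s store) release(key string) bool {
        if _, ok := s[key]; !ok {
            return false // "Asked to release address but it doesn't exist. Ignoring"
        }
        delete(s, key)
        return true
    }

    func main() {
        s := store{}
        handleID := "k8s-pod-network.630b1847d9c1..." // truncated for brevity
        workloadID := "ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0"

        if !s.release(handleID) { // "Releasing address using handleID"
            s.release(workloadID) // fall back: "Releasing address using workloadID"
        }
        fmt.Println("teardown processing complete")
    }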
Jan 30 13:54:29.814132 containerd[2095]: 2025-01-30 13:54:29.811 [INFO][5417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:29.817827 containerd[2095]: time="2025-01-30T13:54:29.814643137Z" level=info msg="TearDown network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" successfully" Jan 30 13:54:29.817827 containerd[2095]: time="2025-01-30T13:54:29.814679072Z" level=info msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" returns successfully" Jan 30 13:54:29.822679 containerd[2095]: time="2025-01-30T13:54:29.821151541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-ktz62,Uid:955728c7-dcd5-4a2a-928a-608a27b0ea08,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:54:29.824552 systemd[1]: run-netns-cni\x2df805ebd0\x2dec76\x2d1253\x2d954b\x2df7f80ae1b3e3.mount: Deactivated successfully. Jan 30 13:54:30.057994 systemd-networkd[1651]: cali82dcac028ac: Link UP Jan 30 13:54:30.064526 systemd-networkd[1651]: cali82dcac028ac: Gained carrier Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.902 [INFO][5470] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.915 [INFO][5470] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0 calico-apiserver-7b99fdb47b- calico-apiserver 955728c7-dcd5-4a2a-928a-608a27b0ea08 781 0 2025-01-30 13:54:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b99fdb47b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-102 calico-apiserver-7b99fdb47b-ktz62 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82dcac028ac [] []}} ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.915 [INFO][5470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.955 [INFO][5479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" HandleID="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.969 [INFO][5479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" HandleID="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ae0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-102", "pod":"calico-apiserver-7b99fdb47b-ktz62", "timestamp":"2025-01-30 13:54:29.955409423 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.969 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.969 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.969 [INFO][5479] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.972 [INFO][5479] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.978 [INFO][5479] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.983 [INFO][5479] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.985 [INFO][5479] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.988 [INFO][5479] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.988 [INFO][5479] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.990 [INFO][5479] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874 Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:29.998 [INFO][5479] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:30.035 [INFO][5479] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.69/26] block=192.168.54.64/26 handle="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:30.035 [INFO][5479] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.69/26] handle="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" host="ip-172-31-23-102" Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:30.035 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:54:30.101022 containerd[2095]: 2025-01-30 13:54:30.035 [INFO][5479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.69/26] IPv6=[] ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" HandleID="k8s-pod-network.33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.046 [INFO][5470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"955728c7-dcd5-4a2a-928a-608a27b0ea08", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"calico-apiserver-7b99fdb47b-ktz62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82dcac028ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.046 [INFO][5470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.69/32] ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.046 [INFO][5470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82dcac028ac ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.049 [INFO][5470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.061 [INFO][5470] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"955728c7-dcd5-4a2a-928a-608a27b0ea08", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874", Pod:"calico-apiserver-7b99fdb47b-ktz62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82dcac028ac", MAC:"02:a4:44:7a:71:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:30.109124 containerd[2095]: 2025-01-30 13:54:30.094 [INFO][5470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874" Namespace="calico-apiserver" Pod="calico-apiserver-7b99fdb47b-ktz62" WorkloadEndpoint="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:30.193981 containerd[2095]: time="2025-01-30T13:54:30.193748905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:30.193981 containerd[2095]: time="2025-01-30T13:54:30.193822212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:30.193981 containerd[2095]: time="2025-01-30T13:54:30.193860290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:30.200079 containerd[2095]: time="2025-01-30T13:54:30.199052177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:30.278011 systemd[1]: run-containerd-runc-k8s.io-33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874-runc.O1NDvx.mount: Deactivated successfully. 
Jan 30 13:54:30.622187 containerd[2095]: time="2025-01-30T13:54:30.621797089Z" level=info msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" Jan 30 13:54:30.648114 kubelet[3401]: I0130 13:54:30.648060 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:30.659090 systemd-networkd[1651]: cali8cb76a8f17b: Gained IPv6LL Jan 30 13:54:30.819115 containerd[2095]: time="2025-01-30T13:54:30.818675757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b99fdb47b-ktz62,Uid:955728c7-dcd5-4a2a-928a-608a27b0ea08,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874\"" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.949 [INFO][5570] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.949 [INFO][5570] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" iface="eth0" netns="/var/run/netns/cni-44df4fbe-5be0-522c-593a-177ab12577bb" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.951 [INFO][5570] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" iface="eth0" netns="/var/run/netns/cni-44df4fbe-5be0-522c-593a-177ab12577bb" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.952 [INFO][5570] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" iface="eth0" netns="/var/run/netns/cni-44df4fbe-5be0-522c-593a-177ab12577bb" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.952 [INFO][5570] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:30.952 [INFO][5570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.054 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.055 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.056 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.078 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.078 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.083 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:31.111466 containerd[2095]: 2025-01-30 13:54:31.095 [INFO][5570] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:31.117124 containerd[2095]: time="2025-01-30T13:54:31.113974794Z" level=info msg="TearDown network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" successfully" Jan 30 13:54:31.117124 containerd[2095]: time="2025-01-30T13:54:31.114013696Z" level=info msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" returns successfully" Jan 30 13:54:31.122310 containerd[2095]: time="2025-01-30T13:54:31.119473768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjwhb,Uid:89933e39-f4f4-49ac-8467-88e1539cd0a5,Namespace:calico-system,Attempt:1,}" Jan 30 13:54:31.125284 systemd[1]: run-netns-cni\x2d44df4fbe\x2d5be0\x2d522c\x2d593a\x2d177ab12577bb.mount: Deactivated successfully. Jan 30 13:54:31.579480 systemd-networkd[1651]: caliaec1a939c30: Link UP Jan 30 13:54:31.579753 systemd-networkd[1651]: caliaec1a939c30: Gained carrier Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.228 [INFO][5607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.253 [INFO][5607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0 csi-node-driver- calico-system 89933e39-f4f4-49ac-8467-88e1539cd0a5 806 0 2025-01-30 13:54:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-102 csi-node-driver-qjwhb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaec1a939c30 [] []}} ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.254 [INFO][5607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.429 [INFO][5627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" HandleID="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.452 [INFO][5627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" HandleID="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-102", "pod":"csi-node-driver-qjwhb", "timestamp":"2025-01-30 13:54:31.429194632 +0000 UTC"}, Hostname:"ip-172-31-23-102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.453 [INFO][5627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.453 [INFO][5627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.453 [INFO][5627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-102' Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.458 [INFO][5627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.471 [INFO][5627] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.479 [INFO][5627] ipam/ipam.go 489: Trying affinity for 192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.513 [INFO][5627] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.526 [INFO][5627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.527 [INFO][5627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.530 [INFO][5627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.538 [INFO][5627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.553 [INFO][5627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.70/26] block=192.168.54.64/26 handle="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.554 [INFO][5627] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.54.70/26] handle="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" host="ip-172-31-23-102" Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.554 [INFO][5627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:31.643358 containerd[2095]: 2025-01-30 13:54:31.554 [INFO][5627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.70/26] IPv6=[] ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" HandleID="k8s-pod-network.e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.567 [INFO][5607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89933e39-f4f4-49ac-8467-88e1539cd0a5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"", Pod:"csi-node-driver-qjwhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaec1a939c30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.568 [INFO][5607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.70/32] ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.568 [INFO][5607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaec1a939c30 ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.580 [INFO][5607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" 
WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.582 [INFO][5607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89933e39-f4f4-49ac-8467-88e1539cd0a5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa", Pod:"csi-node-driver-qjwhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaec1a939c30", MAC:"a2:ee:f3:3d:28:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:31.644964 containerd[2095]: 2025-01-30 13:54:31.619 [INFO][5607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa" Namespace="calico-system" Pod="csi-node-driver-qjwhb" WorkloadEndpoint="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:31.721354 containerd[2095]: time="2025-01-30T13:54:31.719365508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:31.721354 containerd[2095]: time="2025-01-30T13:54:31.719450978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:31.721354 containerd[2095]: time="2025-01-30T13:54:31.719595738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:31.721354 containerd[2095]: time="2025-01-30T13:54:31.719807149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:31.883746 containerd[2095]: time="2025-01-30T13:54:31.883623472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qjwhb,Uid:89933e39-f4f4-49ac-8467-88e1539cd0a5,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa\"" Jan 30 13:54:31.954906 kernel: bpftool[5702]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:54:32.067028 systemd-networkd[1651]: cali82dcac028ac: Gained IPv6LL Jan 30 13:54:32.553731 systemd-networkd[1651]: vxlan.calico: Link UP Jan 30 13:54:32.553739 systemd-networkd[1651]: vxlan.calico: Gained carrier Jan 30 13:54:32.674912 (udev-worker)[5638]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:32.966138 systemd-networkd[1651]: caliaec1a939c30: Gained IPv6LL Jan 30 13:54:33.803714 containerd[2095]: time="2025-01-30T13:54:33.803396296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:33.805917 containerd[2095]: time="2025-01-30T13:54:33.805835899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:54:33.812746 containerd[2095]: time="2025-01-30T13:54:33.812677889Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:33.823544 containerd[2095]: time="2025-01-30T13:54:33.818643657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:33.823544 containerd[2095]: time="2025-01-30T13:54:33.819808978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.916875726s" Jan 30 13:54:33.826913 containerd[2095]: time="2025-01-30T13:54:33.819846509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:54:33.834792 containerd[2095]: time="2025-01-30T13:54:33.834223710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:54:33.897495 containerd[2095]: time="2025-01-30T13:54:33.897401309Z" level=info msg="CreateContainer within sandbox \"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:54:33.930108 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:54:33.924647 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:54:33.924695 systemd-resolved[1973]: Flushed all caches. 
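[Editorial aside] The ImageCreate/PullImage events above are containerd's view of the kube-controllers image pull. A minimal Go sketch of how those CRI-managed images could be enumerated through containerd's client API, assuming the default socket /run/containerd/containerd.sock and the k8s.io namespace that CRI uses:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same containerd instance that emits the events above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI keeps Kubernetes-managed images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	imgs, err := client.ImageService().List(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		// Prints entries like ghcr.io/flatcar/calico/kube-controllers:v3.29.1
		// alongside their manifest digests.
		fmt.Println(img.Name, img.Target.Digest)
	}
}
```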
Jan 30 13:54:33.973816 containerd[2095]: time="2025-01-30T13:54:33.973651811Z" level=info msg="CreateContainer within sandbox \"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"60cfa40b3ed72c834e8959b1dde4b904e849d0612a0bdad71657ca3e4d375ca3\"" Jan 30 13:54:33.976777 containerd[2095]: time="2025-01-30T13:54:33.976710830Z" level=info msg="StartContainer for \"60cfa40b3ed72c834e8959b1dde4b904e849d0612a0bdad71657ca3e4d375ca3\"" Jan 30 13:54:34.257165 containerd[2095]: time="2025-01-30T13:54:34.256951558Z" level=info msg="StartContainer for \"60cfa40b3ed72c834e8959b1dde4b904e849d0612a0bdad71657ca3e4d375ca3\" returns successfully" Jan 30 13:54:34.430440 kubelet[3401]: I0130 13:54:34.429219 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fd4bfb6f4-cv7hp" podStartSLOduration=27.493879368 podStartE2EDuration="32.4291959s" podCreationTimestamp="2025-01-30 13:54:02 +0000 UTC" firstStartedPulling="2025-01-30 13:54:28.898581888 +0000 UTC m=+48.564710732" lastFinishedPulling="2025-01-30 13:54:33.833898406 +0000 UTC m=+53.500027264" observedRunningTime="2025-01-30 13:54:34.42681547 +0000 UTC m=+54.092944334" watchObservedRunningTime="2025-01-30 13:54:34.4291959 +0000 UTC m=+54.095324766" Jan 30 13:54:34.563176 systemd-networkd[1651]: vxlan.calico: Gained IPv6LL Jan 30 13:54:36.789959 ntpd[2045]: Listen normally on 6 vxlan.calico 192.168.54.64:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 6 vxlan.calico 192.168.54.64:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 7 cali918ff56ceea [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 8 cali2974176b028 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 9 calic6dda57501e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 10 cali8cb76a8f17b [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 11 cali82dcac028ac [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 12 caliaec1a939c30 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:54:36.792417 ntpd[2045]: 30 Jan 13:54:36 ntpd[2045]: Listen normally on 13 vxlan.calico [fe80::64ba:d4ff:fe49:884f%10]:123 Jan 30 13:54:36.790051 ntpd[2045]: Listen normally on 7 cali918ff56ceea [fe80::ecee:eeff:feee:eeee%4]:123 Jan 30 13:54:36.790119 ntpd[2045]: Listen normally on 8 cali2974176b028 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 30 13:54:36.790156 ntpd[2045]: Listen normally on 9 calic6dda57501e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 30 13:54:36.790195 ntpd[2045]: Listen normally on 10 cali8cb76a8f17b [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:54:36.790231 ntpd[2045]: Listen normally on 11 cali82dcac028ac [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:54:36.790266 ntpd[2045]: Listen normally on 12 caliaec1a939c30 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:54:36.790303 ntpd[2045]: Listen normally on 13 vxlan.calico [fe80::64ba:d4ff:fe49:884f%10]:123 Jan 30 13:54:36.797459 containerd[2095]: time="2025-01-30T13:54:36.797420072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 
30 13:54:36.799413 containerd[2095]: time="2025-01-30T13:54:36.799314737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:54:36.800525 containerd[2095]: time="2025-01-30T13:54:36.800465135Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:36.804922 containerd[2095]: time="2025-01-30T13:54:36.804625357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:36.805520 containerd[2095]: time="2025-01-30T13:54:36.805480374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.971204664s" Jan 30 13:54:36.805620 containerd[2095]: time="2025-01-30T13:54:36.805527060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:54:36.807258 containerd[2095]: time="2025-01-30T13:54:36.807224090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:54:36.813256 containerd[2095]: time="2025-01-30T13:54:36.813185202Z" level=info msg="CreateContainer within sandbox \"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:54:36.832175 containerd[2095]: time="2025-01-30T13:54:36.831426081Z" level=info msg="CreateContainer within sandbox \"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f77d80527f382058deb57ccb4c8e8954fedc21c24fab9304a471ddab2ec4c96\"" Jan 30 13:54:36.834920 containerd[2095]: time="2025-01-30T13:54:36.833645731Z" level=info msg="StartContainer for \"4f77d80527f382058deb57ccb4c8e8954fedc21c24fab9304a471ddab2ec4c96\"" Jan 30 13:54:36.962916 containerd[2095]: time="2025-01-30T13:54:36.962554122Z" level=info msg="StartContainer for \"4f77d80527f382058deb57ccb4c8e8954fedc21c24fab9304a471ddab2ec4c96\" returns successfully" Jan 30 13:54:37.148309 containerd[2095]: time="2025-01-30T13:54:37.148182245Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:37.151902 containerd[2095]: time="2025-01-30T13:54:37.151696479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:54:37.155897 containerd[2095]: time="2025-01-30T13:54:37.155763706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 347.701896ms" Jan 30 13:54:37.155897 containerd[2095]: time="2025-01-30T13:54:37.155826094Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:54:37.158059 containerd[2095]: time="2025-01-30T13:54:37.157643971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:54:37.177411 containerd[2095]: time="2025-01-30T13:54:37.177370463Z" level=info msg="CreateContainer within sandbox \"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:54:37.262367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395186741.mount: Deactivated successfully. Jan 30 13:54:37.263549 containerd[2095]: time="2025-01-30T13:54:37.262978003Z" level=info msg="CreateContainer within sandbox \"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c2219df7e10123ae8af5b87806f221492a6fad93cd57b352e275b1382b0a451\"" Jan 30 13:54:37.271279 containerd[2095]: time="2025-01-30T13:54:37.271247340Z" level=info msg="StartContainer for \"5c2219df7e10123ae8af5b87806f221492a6fad93cd57b352e275b1382b0a451\"" Jan 30 13:54:37.496929 containerd[2095]: time="2025-01-30T13:54:37.496778228Z" level=info msg="StartContainer for \"5c2219df7e10123ae8af5b87806f221492a6fad93cd57b352e275b1382b0a451\" returns successfully" Jan 30 13:54:38.453714 kubelet[3401]: I0130 13:54:38.453541 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:38.498936 kubelet[3401]: I0130 13:54:38.498853 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b99fdb47b-ktz62" podStartSLOduration=31.16781571 podStartE2EDuration="37.498828435s" podCreationTimestamp="2025-01-30 13:54:01 +0000 UTC" firstStartedPulling="2025-01-30 13:54:30.826267271 +0000 UTC m=+50.492396123" lastFinishedPulling="2025-01-30 13:54:37.157279993 +0000 UTC m=+56.823408848" observedRunningTime="2025-01-30 13:54:38.498366935 +0000 UTC m=+58.164495797" watchObservedRunningTime="2025-01-30 13:54:38.498828435 +0000 UTC m=+58.164957301" Jan 30 13:54:38.500177 kubelet[3401]: I0130 13:54:38.498982 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b99fdb47b-fvbzx" podStartSLOduration=30.482209854 podStartE2EDuration="37.498976283s" podCreationTimestamp="2025-01-30 13:54:01 +0000 UTC" firstStartedPulling="2025-01-30 13:54:29.790104892 +0000 UTC m=+49.456233748" lastFinishedPulling="2025-01-30 13:54:36.806871316 +0000 UTC m=+56.473000177" observedRunningTime="2025-01-30 13:54:37.455148617 +0000 UTC m=+57.121277482" watchObservedRunningTime="2025-01-30 13:54:38.498976283 +0000 UTC m=+58.165105146" Jan 30 13:54:38.548085 systemd[1]: Started sshd@7-172.31.23.102:22-139.178.68.195:36238.service - OpenSSH per-connection server daemon (139.178.68.195:36238). Jan 30 13:54:38.952254 sshd[5920]: Accepted publickey for core from 139.178.68.195 port 36238 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:38.959168 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:38.986779 systemd-logind[2064]: New session 8 of user core. Jan 30 13:54:38.993682 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:54:39.305060 containerd[2095]: time="2025-01-30T13:54:39.305009095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:39.308727 containerd[2095]: time="2025-01-30T13:54:39.308662533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:54:39.314927 containerd[2095]: time="2025-01-30T13:54:39.312266715Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:39.318645 containerd[2095]: time="2025-01-30T13:54:39.318475969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:39.321344 containerd[2095]: time="2025-01-30T13:54:39.321203960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.163385346s" Jan 30 13:54:39.321492 containerd[2095]: time="2025-01-30T13:54:39.321348622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:54:39.331114 containerd[2095]: time="2025-01-30T13:54:39.330869688Z" level=info msg="CreateContainer within sandbox \"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:54:39.434377 containerd[2095]: time="2025-01-30T13:54:39.434336952Z" level=info msg="CreateContainer within sandbox \"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6db8189886b723033a37d931b7f1a1481e0d1f74af88c9cdc12425a734b1e4d7\"" Jan 30 13:54:39.439871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676313652.mount: Deactivated successfully. Jan 30 13:54:39.443255 containerd[2095]: time="2025-01-30T13:54:39.441208300Z" level=info msg="StartContainer for \"6db8189886b723033a37d931b7f1a1481e0d1f74af88c9cdc12425a734b1e4d7\"" Jan 30 13:54:39.504900 kubelet[3401]: I0130 13:54:39.502847 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:39.707732 containerd[2095]: time="2025-01-30T13:54:39.707427637Z" level=info msg="StartContainer for \"6db8189886b723033a37d931b7f1a1481e0d1f74af88c9cdc12425a734b1e4d7\" returns successfully" Jan 30 13:54:39.713366 containerd[2095]: time="2025-01-30T13:54:39.713138762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:54:39.938973 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:54:39.939014 systemd-resolved[1973]: Flushed all caches. Jan 30 13:54:39.943637 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:54:40.621162 sshd[5920]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:40.635747 systemd[1]: sshd@7-172.31.23.102:22-139.178.68.195:36238.service: Deactivated successfully. 
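[Editorial aside] ImageCreate lines like the ones above can also be consumed programmatically. A hedged sketch using containerd's event subscription, under the same client-library and socket assumptions as the earlier sketch, keeping only image-topic envelopes:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	containerd "github.com/containerd/containerd"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Stream the event bus; the ImageCreate entries above correspond to
	// envelopes published on /images/... topics.
	ch, errs := client.Subscribe(context.Background())
	for {
		select {
		case e := <-ch:
			if strings.HasPrefix(e.Topic, "/images/") {
				fmt.Println(e.Namespace, e.Topic)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```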
Jan 30 13:54:40.644710 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:54:40.645403 systemd-logind[2064]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:54:40.650482 systemd-logind[2064]: Removed session 8. Jan 30 13:54:40.760723 containerd[2095]: time="2025-01-30T13:54:40.760685758Z" level=info msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.109 [WARNING][5996] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89933e39-f4f4-49ac-8467-88e1539cd0a5", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa", Pod:"csi-node-driver-qjwhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaec1a939c30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.114 [INFO][5996] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.114 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" iface="eth0" netns="" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.114 [INFO][5996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.114 [INFO][5996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.208 [INFO][6007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.209 [INFO][6007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.209 [INFO][6007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.222 [WARNING][6007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.222 [INFO][6007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.225 [INFO][6007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:41.241901 containerd[2095]: 2025-01-30 13:54:41.231 [INFO][5996] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.243558 containerd[2095]: time="2025-01-30T13:54:41.243114071Z" level=info msg="TearDown network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" successfully" Jan 30 13:54:41.243558 containerd[2095]: time="2025-01-30T13:54:41.243144133Z" level=info msg="StopPodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" returns successfully" Jan 30 13:54:41.281056 containerd[2095]: time="2025-01-30T13:54:41.281007857Z" level=info msg="RemovePodSandbox for \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" Jan 30 13:54:41.285069 containerd[2095]: time="2025-01-30T13:54:41.284670874Z" level=info msg="Forcibly stopping sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\"" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.423 [WARNING][6025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89933e39-f4f4-49ac-8467-88e1539cd0a5", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa", Pod:"csi-node-driver-qjwhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaec1a939c30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.424 [INFO][6025] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.424 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" iface="eth0" netns="" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.424 [INFO][6025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.424 [INFO][6025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.553 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.553 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.556 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.587 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.590 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" HandleID="k8s-pod-network.b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Workload="ip--172--31--23--102-k8s-csi--node--driver--qjwhb-eth0" Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.598 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:41.636083 containerd[2095]: 2025-01-30 13:54:41.621 [INFO][6025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff" Jan 30 13:54:41.637239 containerd[2095]: time="2025-01-30T13:54:41.636135810Z" level=info msg="TearDown network for sandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" successfully" Jan 30 13:54:41.664786 containerd[2095]: time="2025-01-30T13:54:41.664721397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:41.686583 containerd[2095]: time="2025-01-30T13:54:41.686495415Z" level=info msg="RemovePodSandbox \"b4e187380a46d1b9706f790e3cf321040cd75bd4fea8554b9e5c9401d0b0beff\" returns successfully" Jan 30 13:54:41.688927 containerd[2095]: time="2025-01-30T13:54:41.688840037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:41.698693 containerd[2095]: time="2025-01-30T13:54:41.698448889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:54:41.700040 containerd[2095]: time="2025-01-30T13:54:41.699903782Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:41.703249 containerd[2095]: time="2025-01-30T13:54:41.703119536Z" level=info msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" Jan 30 13:54:41.708657 containerd[2095]: time="2025-01-30T13:54:41.708046374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:41.736839 containerd[2095]: time="2025-01-30T13:54:41.736752138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.023569877s" Jan 30 13:54:41.737213 containerd[2095]: time="2025-01-30T13:54:41.737142928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns 
image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:54:41.745200 containerd[2095]: time="2025-01-30T13:54:41.745084902Z" level=info msg="CreateContainer within sandbox \"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:54:41.783646 containerd[2095]: time="2025-01-30T13:54:41.780203218Z" level=info msg="CreateContainer within sandbox \"e1e746a7090c8759d51ba435d8b37be2dab12c531cf02d74e0787efca162bbaa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"90f75d096738b7c85a7d7b76b2ce9a9b643fb8707ddc965799829e1d22b69946\"" Jan 30 13:54:41.786316 containerd[2095]: time="2025-01-30T13:54:41.784866354Z" level=info msg="StartContainer for \"90f75d096738b7c85a7d7b76b2ce9a9b643fb8707ddc965799829e1d22b69946\"" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.808 [WARNING][6049] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75d06128-1791-4f18-82b2-8e83d7439284", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864", Pod:"calico-apiserver-7b99fdb47b-fvbzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb76a8f17b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.810 [INFO][6049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.810 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" iface="eth0" netns="" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.810 [INFO][6049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.810 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.858 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.858 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.858 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.868 [WARNING][6059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.874 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.878 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:41.895472 containerd[2095]: 2025-01-30 13:54:41.883 [INFO][6049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:41.900212 containerd[2095]: time="2025-01-30T13:54:41.898040154Z" level=info msg="TearDown network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" successfully" Jan 30 13:54:41.900212 containerd[2095]: time="2025-01-30T13:54:41.898077882Z" level=info msg="StopPodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" returns successfully" Jan 30 13:54:41.904290 containerd[2095]: time="2025-01-30T13:54:41.903925123Z" level=info msg="RemovePodSandbox for \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" Jan 30 13:54:41.904290 containerd[2095]: time="2025-01-30T13:54:41.903985003Z" level=info msg="Forcibly stopping sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\"" Jan 30 13:54:41.989543 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:54:41.987121 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:54:41.987180 systemd-resolved[1973]: Flushed all caches. 
Jan 30 13:54:42.039167 containerd[2095]: time="2025-01-30T13:54:42.038526452Z" level=info msg="StartContainer for \"90f75d096738b7c85a7d7b76b2ce9a9b643fb8707ddc965799829e1d22b69946\" returns successfully" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:41.995 [WARNING][6081] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"75d06128-1791-4f18-82b2-8e83d7439284", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"c8e7eb6ec81d242693f8deef7b79a0d7a11401bc0a307039cb692ce915c5c864", Pod:"calico-apiserver-7b99fdb47b-fvbzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cb76a8f17b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:41.996 [INFO][6081] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:41.996 [INFO][6081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" iface="eth0" netns="" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:41.996 [INFO][6081] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:41.996 [INFO][6081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.059 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.059 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.060 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.088 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.088 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" HandleID="k8s-pod-network.744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--fvbzx-eth0" Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.091 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:42.106826 containerd[2095]: 2025-01-30 13:54:42.094 [INFO][6081] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6" Jan 30 13:54:42.109744 containerd[2095]: time="2025-01-30T13:54:42.108279015Z" level=info msg="TearDown network for sandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" successfully" Jan 30 13:54:42.135189 containerd[2095]: time="2025-01-30T13:54:42.135132679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:42.135350 containerd[2095]: time="2025-01-30T13:54:42.135222961Z" level=info msg="RemovePodSandbox \"744b5addb80ebf4b59cca9c8983a7bbdb091a1f2b961b4f5e7dcb573b05200f6\" returns successfully" Jan 30 13:54:42.137732 containerd[2095]: time="2025-01-30T13:54:42.137704422Z" level=info msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.238 [WARNING][6131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0", GenerateName:"calico-kube-controllers-6fd4bfb6f4-", Namespace:"calico-system", SelfLink:"", UID:"d1c07e75-2ef2-4a46-836f-bfeb350b2011", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd4bfb6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e", Pod:"calico-kube-controllers-6fd4bfb6f4-cv7hp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6dda57501e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.239 [INFO][6131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.239 [INFO][6131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" iface="eth0" netns="" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.239 [INFO][6131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.239 [INFO][6131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.312 [INFO][6137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.313 [INFO][6137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.313 [INFO][6137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.332 [WARNING][6137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.333 [INFO][6137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.339 [INFO][6137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:42.346293 containerd[2095]: 2025-01-30 13:54:42.344 [INFO][6131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.347709 containerd[2095]: time="2025-01-30T13:54:42.346894545Z" level=info msg="TearDown network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" successfully" Jan 30 13:54:42.347709 containerd[2095]: time="2025-01-30T13:54:42.346940023Z" level=info msg="StopPodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" returns successfully" Jan 30 13:54:42.347709 containerd[2095]: time="2025-01-30T13:54:42.347668817Z" level=info msg="RemovePodSandbox for \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" Jan 30 13:54:42.347709 containerd[2095]: time="2025-01-30T13:54:42.347707053Z" level=info msg="Forcibly stopping sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\"" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.433 [WARNING][6155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0", GenerateName:"calico-kube-controllers-6fd4bfb6f4-", Namespace:"calico-system", SelfLink:"", UID:"d1c07e75-2ef2-4a46-836f-bfeb350b2011", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fd4bfb6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"02fb1f084ab02a3a2779747abe161d31352141b54f9be8fd3e77e8e5ee96292e", Pod:"calico-kube-controllers-6fd4bfb6f4-cv7hp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic6dda57501e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.434 [INFO][6155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.434 [INFO][6155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" iface="eth0" netns="" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.434 [INFO][6155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.434 [INFO][6155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.476 [INFO][6161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.477 [INFO][6161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.477 [INFO][6161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.486 [WARNING][6161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.486 [INFO][6161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" HandleID="k8s-pod-network.55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Workload="ip--172--31--23--102-k8s-calico--kube--controllers--6fd4bfb6f4--cv7hp-eth0" Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.488 [INFO][6161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:42.494923 containerd[2095]: 2025-01-30 13:54:42.491 [INFO][6155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e" Jan 30 13:54:42.494923 containerd[2095]: time="2025-01-30T13:54:42.493713649Z" level=info msg="TearDown network for sandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" successfully" Jan 30 13:54:42.507695 containerd[2095]: time="2025-01-30T13:54:42.507636241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:42.530759 containerd[2095]: time="2025-01-30T13:54:42.530707743Z" level=info msg="RemovePodSandbox \"55d1e351f14edcbcd30c03829b2dacb4f06c00cd8f3a12cccdda01708d4d131e\" returns successfully" Jan 30 13:54:42.531914 containerd[2095]: time="2025-01-30T13:54:42.531841559Z" level=info msg="StopPodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.687 [WARNING][6179] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"97758f69-3608-4bd5-a29f-602e25cb96c7", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360", Pod:"coredns-7db6d8ff4d-stfj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2974176b028", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.689 [INFO][6179] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.689 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" iface="eth0" netns="" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.689 [INFO][6179] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.689 [INFO][6179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.739 [INFO][6185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.740 [INFO][6185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.740 [INFO][6185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.754 [WARNING][6185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.754 [INFO][6185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.756 [INFO][6185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:42.761401 containerd[2095]: 2025-01-30 13:54:42.759 [INFO][6179] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.761401 containerd[2095]: time="2025-01-30T13:54:42.761367666Z" level=info msg="TearDown network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" successfully" Jan 30 13:54:42.761401 containerd[2095]: time="2025-01-30T13:54:42.761399180Z" level=info msg="StopPodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" returns successfully" Jan 30 13:54:42.770953 containerd[2095]: time="2025-01-30T13:54:42.764333290Z" level=info msg="RemovePodSandbox for \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" Jan 30 13:54:42.770953 containerd[2095]: time="2025-01-30T13:54:42.764378260Z" level=info msg="Forcibly stopping sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\"" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.830 [WARNING][6203] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"97758f69-3608-4bd5-a29f-602e25cb96c7", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"65710e2f9b083e16aaab38dad1c5b2f9b3fe32f52fef8b5d746e6bb38bab3360", Pod:"coredns-7db6d8ff4d-stfj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2974176b028", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.831 [INFO][6203] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.831 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" iface="eth0" netns="" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.831 [INFO][6203] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.831 [INFO][6203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.872 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.872 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.872 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.883 [WARNING][6209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.883 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" HandleID="k8s-pod-network.a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--stfj9-eth0" Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.889 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:42.898275 containerd[2095]: 2025-01-30 13:54:42.893 [INFO][6203] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3" Jan 30 13:54:42.899958 containerd[2095]: time="2025-01-30T13:54:42.899498042Z" level=info msg="TearDown network for sandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" successfully" Jan 30 13:54:42.918428 containerd[2095]: time="2025-01-30T13:54:42.915692085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:42.918428 containerd[2095]: time="2025-01-30T13:54:42.915784609Z" level=info msg="RemovePodSandbox \"a43897853c8fd200597d9055b5ff78c738b67dcfc2e122ad4c65af26fc6855c3\" returns successfully" Jan 30 13:54:42.918825 containerd[2095]: time="2025-01-30T13:54:42.918786922Z" level=info msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:42.995 [WARNING][6227] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"955728c7-dcd5-4a2a-928a-608a27b0ea08", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874", Pod:"calico-apiserver-7b99fdb47b-ktz62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82dcac028ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:42.996 [INFO][6227] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:42.996 [INFO][6227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" iface="eth0" netns="" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:42.996 [INFO][6227] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:42.996 [INFO][6227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.064 [INFO][6234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.064 [INFO][6234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.064 [INFO][6234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.075 [WARNING][6234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.075 [INFO][6234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.078 [INFO][6234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:43.082664 containerd[2095]: 2025-01-30 13:54:43.080 [INFO][6227] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.086310 containerd[2095]: time="2025-01-30T13:54:43.083969649Z" level=info msg="TearDown network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" successfully" Jan 30 13:54:43.086310 containerd[2095]: time="2025-01-30T13:54:43.084012738Z" level=info msg="StopPodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" returns successfully" Jan 30 13:54:43.086310 containerd[2095]: time="2025-01-30T13:54:43.085128876Z" level=info msg="RemovePodSandbox for \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" Jan 30 13:54:43.086310 containerd[2095]: time="2025-01-30T13:54:43.085163387Z" level=info msg="Forcibly stopping sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\"" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.183 [WARNING][6252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0", GenerateName:"calico-apiserver-7b99fdb47b-", Namespace:"calico-apiserver", SelfLink:"", UID:"955728c7-dcd5-4a2a-928a-608a27b0ea08", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b99fdb47b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"33e11650b0d92af738bf1aba6ffa81b1d3596959fefbb170462ec85193007874", Pod:"calico-apiserver-7b99fdb47b-ktz62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82dcac028ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.184 [INFO][6252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.184 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" iface="eth0" netns="" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.184 [INFO][6252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.184 [INFO][6252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.217 [INFO][6259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.218 [INFO][6259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.218 [INFO][6259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.228 [WARNING][6259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.229 [INFO][6259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" HandleID="k8s-pod-network.630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Workload="ip--172--31--23--102-k8s-calico--apiserver--7b99fdb47b--ktz62-eth0" Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.232 [INFO][6259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:43.236754 containerd[2095]: 2025-01-30 13:54:43.234 [INFO][6252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e" Jan 30 13:54:43.236754 containerd[2095]: time="2025-01-30T13:54:43.236498732Z" level=info msg="TearDown network for sandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" successfully" Jan 30 13:54:43.245200 containerd[2095]: time="2025-01-30T13:54:43.244933670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:43.245200 containerd[2095]: time="2025-01-30T13:54:43.245019448Z" level=info msg="RemovePodSandbox \"630b1847d9c10a2e208c86dcfd9698e221af4a27cba08687339465f8b336214e\" returns successfully" Jan 30 13:54:43.246432 containerd[2095]: time="2025-01-30T13:54:43.246397216Z" level=info msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" Jan 30 13:54:43.307566 kubelet[3401]: I0130 13:54:43.307496 3401 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:54:43.316599 kubelet[3401]: I0130 13:54:43.316479 3401 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.309 [WARNING][6278] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8add0565-323b-46d5-8793-d7bb0f574609", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e", Pod:"coredns-7db6d8ff4d-84qz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali918ff56ceea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.309 [INFO][6278] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.309 [INFO][6278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" iface="eth0" netns="" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.309 [INFO][6278] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.309 [INFO][6278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.341 [INFO][6284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.342 [INFO][6284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.342 [INFO][6284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.351 [WARNING][6284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.351 [INFO][6284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.354 [INFO][6284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:43.359699 containerd[2095]: 2025-01-30 13:54:43.356 [INFO][6278] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.362842 containerd[2095]: time="2025-01-30T13:54:43.359676646Z" level=info msg="TearDown network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" successfully" Jan 30 13:54:43.362842 containerd[2095]: time="2025-01-30T13:54:43.359972269Z" level=info msg="StopPodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" returns successfully" Jan 30 13:54:43.362842 containerd[2095]: time="2025-01-30T13:54:43.361107678Z" level=info msg="RemovePodSandbox for \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" Jan 30 13:54:43.362842 containerd[2095]: time="2025-01-30T13:54:43.361141258Z" level=info msg="Forcibly stopping sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\"" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.463 [WARNING][6303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8add0565-323b-46d5-8793-d7bb0f574609", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-102", ContainerID:"17b934772c5aa4d9a52bdbe4d12520ec9bf00a364004b46c6da3bd534e9de68e", Pod:"coredns-7db6d8ff4d-84qz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali918ff56ceea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.463 [INFO][6303] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.464 [INFO][6303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" iface="eth0" netns="" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.464 [INFO][6303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.464 [INFO][6303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.496 [INFO][6309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.496 [INFO][6309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.496 [INFO][6309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.505 [WARNING][6309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.505 [INFO][6309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" HandleID="k8s-pod-network.ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Workload="ip--172--31--23--102-k8s-coredns--7db6d8ff4d--84qz8-eth0" Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.507 [INFO][6309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:54:43.511105 containerd[2095]: 2025-01-30 13:54:43.509 [INFO][6303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0" Jan 30 13:54:43.512561 containerd[2095]: time="2025-01-30T13:54:43.511151571Z" level=info msg="TearDown network for sandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" successfully" Jan 30 13:54:43.519137 containerd[2095]: time="2025-01-30T13:54:43.519090628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:54:43.519137 containerd[2095]: time="2025-01-30T13:54:43.519231088Z" level=info msg="RemovePodSandbox \"ac4689f84082430159dbf25ab66facdf1b47e45260712660c9d42fe6d75487a0\" returns successfully" Jan 30 13:54:45.648366 systemd[1]: Started sshd@8-172.31.23.102:22-139.178.68.195:51348.service - OpenSSH per-connection server daemon (139.178.68.195:51348). Jan 30 13:54:45.878354 sshd[6342]: Accepted publickey for core from 139.178.68.195 port 51348 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:45.884224 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:45.890503 systemd-logind[2064]: New session 9 of user core. Jan 30 13:54:45.900217 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:54:46.686424 sshd[6342]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:46.693101 systemd[1]: sshd@8-172.31.23.102:22-139.178.68.195:51348.service: Deactivated successfully. Jan 30 13:54:46.698513 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:54:46.699692 systemd-logind[2064]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:54:46.700838 systemd-logind[2064]: Removed session 9. 
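The three StopPodSandbox / RemovePodSandbox passes above (sandboxes a43897, 630b18 and ac4689) all trace the same idempotent CNI DEL contract. First the plugin notices that the sandbox being deleted no longer owns the WorkloadEndpoint: each WEP dump records a newer ContainerID (65710e, 33e116, 17b934), so the endpoint is kept ("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP"). Then IPAM changes are serialized behind the host-wide lock, and an address that is already gone is logged as a WARNING and ignored rather than treated as a failure, which is what lets the forcible second pass still return success. A minimal Go sketch of that guard-then-release-idempotently shape, with invented names and a map standing in for the datastore (not the actual Calico code):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // shouldDeleteWEP is the guard behind the WARNING lines above: a CNI DEL
    // for an old sandbox must not delete a WorkloadEndpoint that has since
    // been re-owned by a newer container.
    func shouldDeleteWEP(cniContainerID, wepContainerID string) bool {
    	return cniContainerID == wepContainerID
    }

    // ipamStore is a toy stand-in for the node's IPAM state, keyed by handle ID.
    type ipamStore struct {
    	mu       sync.Mutex        // plays the role of the "host-wide IPAM lock"
    	byHandle map[string]string // handle ID -> allocated address
    }

    // releaseByHandle mirrors the release path in the trace: take the lock,
    // release the address if the handle still exists, and treat a missing
    // handle as success so repeated teardowns stay idempotent.
    func (s *ipamStore) releaseByHandle(handleID string) {
    	s.mu.Lock()
    	defer s.mu.Unlock() // "Released host-wide IPAM lock."
    	addr, ok := s.byHandle[handleID]
    	if !ok {
    		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
    		return
    	}
    	delete(s.byHandle, handleID)
    	fmt.Printf("released %s (%s)\n", addr, handleID)
    }

    func main() {
    	s := &ipamStore{byHandle: map[string]string{
    		"k8s-pod-network.demo": "192.168.54.66/32",
    	}}
    	fmt.Println(shouldDeleteWEP("oldsandbox", "newsandbox")) // false: keep the WEP
    	s.releaseByHandle("k8s-pod-network.demo")
    	s.releaseByHandle("k8s-pod-network.demo") // second pass warns and succeeds
    }

Run the release twice against the same handle and the second call warns instead of failing, matching the "Asked to release address but it doesn't exist. Ignoring" entries above.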
Jan 30 13:54:49.126522 kubelet[3401]: I0130 13:54:49.126478 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:49.243714 kubelet[3401]: I0130 13:54:49.242529 3401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qjwhb" podStartSLOduration=37.391821672 podStartE2EDuration="47.242230513s" podCreationTimestamp="2025-01-30 13:54:02 +0000 UTC" firstStartedPulling="2025-01-30 13:54:31.88915401 +0000 UTC m=+51.555282863" lastFinishedPulling="2025-01-30 13:54:41.739562841 +0000 UTC m=+61.405691704" observedRunningTime="2025-01-30 13:54:42.68927864 +0000 UTC m=+62.355407505" watchObservedRunningTime="2025-01-30 13:54:49.242230513 +0000 UTC m=+68.908359377" Jan 30 13:54:49.342697 kubelet[3401]: I0130 13:54:49.341830 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:51.718652 systemd[1]: Started sshd@9-172.31.23.102:22-139.178.68.195:51354.service - OpenSSH per-connection server daemon (139.178.68.195:51354). Jan 30 13:54:51.916354 sshd[6365]: Accepted publickey for core from 139.178.68.195 port 51354 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:51.918685 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:51.929537 systemd-logind[2064]: New session 10 of user core. Jan 30 13:54:51.936002 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:54:52.196889 sshd[6365]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.208441 systemd[1]: sshd@9-172.31.23.102:22-139.178.68.195:51354.service: Deactivated successfully. Jan 30 13:54:52.215702 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:54:52.216492 systemd-logind[2064]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:54:52.228712 systemd[1]: Started sshd@10-172.31.23.102:22-139.178.68.195:51370.service - OpenSSH per-connection server daemon (139.178.68.195:51370). Jan 30 13:54:52.230290 systemd-logind[2064]: Removed session 10. Jan 30 13:54:52.409661 sshd[6380]: Accepted publickey for core from 139.178.68.195 port 51370 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:52.412804 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:52.427523 systemd-logind[2064]: New session 11 of user core. Jan 30 13:54:52.436307 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:54:52.793967 sshd[6380]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:52.815007 systemd[1]: Started sshd@11-172.31.23.102:22-139.178.68.195:51386.service - OpenSSH per-connection server daemon (139.178.68.195:51386). Jan 30 13:54:52.819185 systemd[1]: sshd@10-172.31.23.102:22-139.178.68.195:51370.service: Deactivated successfully. Jan 30 13:54:52.825710 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:54:52.830263 systemd-logind[2064]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:54:52.835492 systemd-logind[2064]: Removed session 11. Jan 30 13:54:53.013144 sshd[6389]: Accepted publickey for core from 139.178.68.195 port 51386 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:53.014629 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:53.022204 systemd-logind[2064]: New session 12 of user core. Jan 30 13:54:53.028459 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 13:54:53.395578 sshd[6389]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:53.401591 systemd[1]: sshd@11-172.31.23.102:22-139.178.68.195:51386.service: Deactivated successfully. Jan 30 13:54:53.407270 systemd-logind[2064]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:54:53.407736 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:54:53.411327 systemd-logind[2064]: Removed session 12. Jan 30 13:54:58.426309 systemd[1]: Started sshd@12-172.31.23.102:22-139.178.68.195:35128.service - OpenSSH per-connection server daemon (139.178.68.195:35128). Jan 30 13:54:58.609500 sshd[6438]: Accepted publickey for core from 139.178.68.195 port 35128 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:58.613061 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:58.619204 systemd-logind[2064]: New session 13 of user core. Jan 30 13:54:58.630839 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:54:58.935580 sshd[6438]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:58.956434 systemd[1]: sshd@12-172.31.23.102:22-139.178.68.195:35128.service: Deactivated successfully. Jan 30 13:54:58.977080 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:54:58.984311 systemd-logind[2064]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:54:58.988592 systemd-logind[2064]: Removed session 13. Jan 30 13:55:03.969463 systemd[1]: Started sshd@13-172.31.23.102:22-139.178.68.195:35144.service - OpenSSH per-connection server daemon (139.178.68.195:35144). Jan 30 13:55:04.200914 sshd[6457]: Accepted publickey for core from 139.178.68.195 port 35144 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:04.204433 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:04.211599 systemd-logind[2064]: New session 14 of user core. Jan 30 13:55:04.219062 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:55:04.778588 sshd[6457]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:04.784427 systemd[1]: sshd@13-172.31.23.102:22-139.178.68.195:35144.service: Deactivated successfully. Jan 30 13:55:04.789887 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:55:04.791385 systemd-logind[2064]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:55:04.793319 systemd-logind[2064]: Removed session 14. Jan 30 13:55:05.987113 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:55:05.989562 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:55:05.987151 systemd-resolved[1973]: Flushed all caches. Jan 30 13:55:09.806993 systemd[1]: Started sshd@14-172.31.23.102:22-139.178.68.195:56084.service - OpenSSH per-connection server daemon (139.178.68.195:56084). Jan 30 13:55:09.996719 sshd[6470]: Accepted publickey for core from 139.178.68.195 port 56084 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:10.001857 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:10.010274 systemd-logind[2064]: New session 15 of user core. Jan 30 13:55:10.017653 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:55:10.304915 sshd[6470]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:10.314976 systemd[1]: sshd@14-172.31.23.102:22-139.178.68.195:56084.service: Deactivated successfully. 
Jan 30 13:55:10.321980 systemd-logind[2064]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:55:10.323029 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:55:10.326773 systemd-logind[2064]: Removed session 15. Jan 30 13:55:15.311455 update_engine[2068]: I20250130 13:55:15.311380 2068 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 13:55:15.311455 update_engine[2068]: I20250130 13:55:15.311447 2068 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 13:55:15.313818 update_engine[2068]: I20250130 13:55:15.313776 2068 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.316421 2068 omaha_request_params.cc:62] Current group set to lts Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.316999 2068 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317021 2068 update_attempter.cc:643] Scheduling an action processor start. Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317044 2068 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317096 2068 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317174 2068 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317184 2068 omaha_request_action.cc:272] Request: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: Jan 30 13:55:15.318225 update_engine[2068]: I20250130 13:55:15.317193 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:55:15.345312 systemd[1]: Started sshd@15-172.31.23.102:22-139.178.68.195:54090.service - OpenSSH per-connection server daemon (139.178.68.195:54090). Jan 30 13:55:15.354903 update_engine[2068]: I20250130 13:55:15.354830 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:55:15.362552 update_engine[2068]: I20250130 13:55:15.360834 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:55:15.366651 locksmithd[2115]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 13:55:15.378196 update_engine[2068]: E20250130 13:55:15.378032 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:55:15.378196 update_engine[2068]: I20250130 13:55:15.378155 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 13:55:15.582248 sshd[6512]: Accepted publickey for core from 139.178.68.195 port 54090 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:15.587150 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:15.594149 systemd-logind[2064]: New session 16 of user core. Jan 30 13:55:15.599269 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 13:55:15.971408 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:55:15.973020 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:55:15.971437 systemd-resolved[1973]: Flushed all caches. Jan 30 13:55:16.179700 sshd[6512]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:16.188388 systemd[1]: sshd@15-172.31.23.102:22-139.178.68.195:54090.service: Deactivated successfully. Jan 30 13:55:16.196608 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:55:16.198759 systemd-logind[2064]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:55:16.207718 systemd[1]: Started sshd@16-172.31.23.102:22-139.178.68.195:54106.service - OpenSSH per-connection server daemon (139.178.68.195:54106). Jan 30 13:55:16.208796 systemd-logind[2064]: Removed session 16. Jan 30 13:55:16.367058 sshd[6527]: Accepted publickey for core from 139.178.68.195 port 54106 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:16.367900 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:16.373963 systemd-logind[2064]: New session 17 of user core. Jan 30 13:55:16.379422 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:55:17.024850 sshd[6527]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:17.034851 systemd[1]: sshd@16-172.31.23.102:22-139.178.68.195:54106.service: Deactivated successfully. Jan 30 13:55:17.044730 systemd-logind[2064]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:55:17.057430 systemd[1]: Started sshd@17-172.31.23.102:22-139.178.68.195:54112.service - OpenSSH per-connection server daemon (139.178.68.195:54112). Jan 30 13:55:17.058696 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:55:17.060313 systemd-logind[2064]: Removed session 17. Jan 30 13:55:17.233928 sshd[6539]: Accepted publickey for core from 139.178.68.195 port 54112 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:17.236555 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:17.242593 systemd-logind[2064]: New session 18 of user core. Jan 30 13:55:17.247956 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:55:18.020359 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:55:18.022371 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:55:18.020375 systemd-resolved[1973]: Flushed all caches. Jan 30 13:55:20.068911 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:55:20.069332 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:55:20.069344 systemd-resolved[1973]: Flushed all caches. Jan 30 13:55:20.146543 sshd[6539]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:20.185808 systemd[1]: sshd@17-172.31.23.102:22-139.178.68.195:54112.service: Deactivated successfully. Jan 30 13:55:20.202058 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:55:20.205939 systemd-logind[2064]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:55:20.222473 systemd[1]: Started sshd@18-172.31.23.102:22-139.178.68.195:54118.service - OpenSSH per-connection server daemon (139.178.68.195:54118). Jan 30 13:55:20.229561 systemd-logind[2064]: Removed session 18. 
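Sessions 9 through 18 above all follow the same systemd choreography: a per-connection sshd service unit is started from the listener, pam_unix opens the session for user core (uid 500), logind creates session-N.scope, and teardown runs in reverse, ending with "Removed session N". When auditing a capture like this, those open/close pairs can be matched mechanically. A throwaway Go sketch that does so for the exact line shapes shown here (regexes and buffer sizes are tuned to this capture only, so treat it as illustrative, not a general journal parser):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"time"
    )

    // A physical line in this capture can carry several journal records, so
    // matches are collected with FindAll rather than anchored to line starts.
    var (
    	reNew     = regexp.MustCompile(`([A-Z][a-z]{2} +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)\.`)
    	reRemoved = regexp.MustCompile(`([A-Z][a-z]{2} +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    const stamp = "Jan 2 15:04:05.000000" // journal short timestamp, year-less

    func main() {
    	opened := map[string]time.Time{} // session ID -> time it was opened
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1<<20), 1<<20) // WorkloadEndpoint dumps make very long lines
    	for sc.Scan() {
    		for _, m := range reNew.FindAllStringSubmatch(sc.Text(), -1) {
    			if t, err := time.Parse(stamp, m[1]); err == nil {
    				opened[m[2]] = t
    			}
    		}
    		for _, m := range reRemoved.FindAllStringSubmatch(sc.Text(), -1) {
    			if t0, ok := opened[m[2]]; ok {
    				if t, err := time.Parse(stamp, m[1]); err == nil {
    					fmt.Printf("session %s lasted %s\n", m[2], t.Sub(t0))
    				}
    				delete(opened, m[2])
    			}
    		}
    	}
    }

Fed this journal on stdin it reports, for example, that session 9 lasted roughly 0.8 seconds (opened 13:54:45.890, removed 13:54:46.700).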
Jan 30 13:55:20.428834 sshd[6563]: Accepted publickey for core from 139.178.68.195 port 54118 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:20.430765 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:20.438462 systemd-logind[2064]: New session 19 of user core. Jan 30 13:55:20.445476 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:55:21.445911 sshd[6563]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.453529 systemd[1]: sshd@18-172.31.23.102:22-139.178.68.195:54118.service: Deactivated successfully. Jan 30 13:55:21.457952 systemd-logind[2064]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:55:21.458601 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:55:21.461101 systemd-logind[2064]: Removed session 19. Jan 30 13:55:21.474416 systemd[1]: Started sshd@19-172.31.23.102:22-139.178.68.195:54122.service - OpenSSH per-connection server daemon (139.178.68.195:54122). Jan 30 13:55:21.655506 sshd[6577]: Accepted publickey for core from 139.178.68.195 port 54122 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:21.658841 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.663966 systemd-logind[2064]: New session 20 of user core. Jan 30 13:55:21.668171 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:55:21.922675 sshd[6577]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.930680 systemd[1]: sshd@19-172.31.23.102:22-139.178.68.195:54122.service: Deactivated successfully. Jan 30 13:55:21.932553 systemd-logind[2064]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:55:21.939822 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:55:21.946458 systemd-logind[2064]: Removed session 20. Jan 30 13:55:22.115189 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 30 13:55:22.115218 systemd-resolved[1973]: Flushed all caches. Jan 30 13:55:22.116903 systemd-journald[1576]: Under memory pressure, flushing caches. Jan 30 13:55:25.245990 update_engine[2068]: I20250130 13:55:25.245916 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:55:25.246824 update_engine[2068]: I20250130 13:55:25.246434 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:55:25.246824 update_engine[2068]: I20250130 13:55:25.246794 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:55:25.247676 update_engine[2068]: E20250130 13:55:25.247459 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:55:25.247676 update_engine[2068]: I20250130 13:55:25.247519 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 13:55:26.961505 systemd[1]: Started sshd@20-172.31.23.102:22-139.178.68.195:52736.service - OpenSSH per-connection server daemon (139.178.68.195:52736). Jan 30 13:55:27.130918 sshd[6594]: Accepted publickey for core from 139.178.68.195 port 52736 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:27.131958 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:27.138315 systemd-logind[2064]: New session 21 of user core. Jan 30 13:55:27.143276 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 30 13:55:27.462245 sshd[6594]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:27.466851 systemd-logind[2064]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:55:27.468484 systemd[1]: sshd@20-172.31.23.102:22-139.178.68.195:52736.service: Deactivated successfully. Jan 30 13:55:27.473459 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:55:27.475497 systemd-logind[2064]: Removed session 21. Jan 30 13:55:32.495266 systemd[1]: Started sshd@21-172.31.23.102:22-139.178.68.195:52740.service - OpenSSH per-connection server daemon (139.178.68.195:52740). Jan 30 13:55:32.702080 sshd[6653]: Accepted publickey for core from 139.178.68.195 port 52740 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:32.705563 sshd[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.712222 systemd-logind[2064]: New session 22 of user core. Jan 30 13:55:32.715209 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:55:32.980339 sshd[6653]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.987739 systemd-logind[2064]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:55:32.987966 systemd[1]: sshd@21-172.31.23.102:22-139.178.68.195:52740.service: Deactivated successfully. Jan 30 13:55:32.996872 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:55:32.999424 systemd-logind[2064]: Removed session 22. Jan 30 13:55:35.245509 update_engine[2068]: I20250130 13:55:35.244940 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:55:35.245509 update_engine[2068]: I20250130 13:55:35.245239 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:55:35.246179 update_engine[2068]: I20250130 13:55:35.246141 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:55:35.247154 update_engine[2068]: E20250130 13:55:35.247035 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:55:35.247154 update_engine[2068]: I20250130 13:55:35.247117 2068 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 13:55:38.014304 systemd[1]: Started sshd@22-172.31.23.102:22-139.178.68.195:50364.service - OpenSSH per-connection server daemon (139.178.68.195:50364). Jan 30 13:55:38.222532 sshd[6667]: Accepted publickey for core from 139.178.68.195 port 50364 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:38.228116 sshd[6667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:38.237084 systemd-logind[2064]: New session 23 of user core. Jan 30 13:55:38.246458 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:55:38.512203 sshd[6667]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:38.525410 systemd-logind[2064]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:55:38.526867 systemd[1]: sshd@22-172.31.23.102:22-139.178.68.195:50364.service: Deactivated successfully. Jan 30 13:55:38.543461 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:55:38.545794 systemd-logind[2064]: Removed session 23. Jan 30 13:55:43.542707 systemd[1]: Started sshd@23-172.31.23.102:22-139.178.68.195:50376.service - OpenSSH per-connection server daemon (139.178.68.195:50376). 
Jan 30 13:55:43.697998 sshd[6683]: Accepted publickey for core from 139.178.68.195 port 50376 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:43.699973 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:43.707665 systemd-logind[2064]: New session 24 of user core. Jan 30 13:55:43.715226 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:55:44.015216 sshd[6683]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:44.028386 systemd[1]: sshd@23-172.31.23.102:22-139.178.68.195:50376.service: Deactivated successfully. Jan 30 13:55:44.041249 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:55:44.045171 systemd-logind[2064]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:55:44.047628 systemd-logind[2064]: Removed session 24. Jan 30 13:55:45.245085 update_engine[2068]: I20250130 13:55:45.245003 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:55:45.245626 update_engine[2068]: I20250130 13:55:45.245308 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:55:45.245626 update_engine[2068]: I20250130 13:55:45.245577 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:55:45.246161 update_engine[2068]: E20250130 13:55:45.246127 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:55:45.246321 update_engine[2068]: I20250130 13:55:45.246202 2068 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:55:45.246321 update_engine[2068]: I20250130 13:55:45.246217 2068 omaha_request_action.cc:617] Omaha request response: Jan 30 13:55:45.246321 update_engine[2068]: E20250130 13:55:45.246315 2068 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246349 2068 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246358 2068 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246366 2068 update_attempter.cc:306] Processing Done. Jan 30 13:55:45.246442 update_engine[2068]: E20250130 13:55:45.246392 2068 update_attempter.cc:619] Update failed. Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246402 2068 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246410 2068 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 13:55:45.246442 update_engine[2068]: I20250130 13:55:45.246419 2068 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 30 13:55:45.246708 update_engine[2068]: I20250130 13:55:45.246511 2068 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:55:45.246708 update_engine[2068]: I20250130 13:55:45.246543 2068 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:55:45.246708 update_engine[2068]: I20250130 13:55:45.246551 2068 omaha_request_action.cc:272] Request: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: Jan 30 13:55:45.246708 update_engine[2068]: I20250130 13:55:45.246562 2068 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:55:45.247202 update_engine[2068]: I20250130 13:55:45.246768 2068 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:55:45.247202 update_engine[2068]: I20250130 13:55:45.247012 2068 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:55:45.247949 update_engine[2068]: E20250130 13:55:45.247548 2068 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247610 2068 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247622 2068 omaha_request_action.cc:617] Omaha request response: Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247630 2068 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247639 2068 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247646 2068 update_attempter.cc:306] Processing Done. Jan 30 13:55:45.247949 update_engine[2068]: I20250130 13:55:45.247654 2068 update_attempter.cc:310] Error event sent. Jan 30 13:55:45.248230 locksmithd[2115]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 13:55:45.248949 update_engine[2068]: I20250130 13:55:45.248669 2068 update_check_scheduler.cc:74] Next update check in 47m54s Jan 30 13:55:45.249263 locksmithd[2115]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 13:55:49.051118 systemd[1]: Started sshd@24-172.31.23.102:22-139.178.68.195:57342.service - OpenSSH per-connection server daemon (139.178.68.195:57342). Jan 30 13:55:49.242732 sshd[6716]: Accepted publickey for core from 139.178.68.195 port 57342 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:49.243506 sshd[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:49.249728 systemd-logind[2064]: New session 25 of user core. Jan 30 13:55:49.259384 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:55:49.485913 sshd[6716]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:49.491446 systemd[1]: sshd@24-172.31.23.102:22-139.178.68.195:57342.service: Deactivated successfully. Jan 30 13:55:49.500555 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:55:49.500912 systemd-logind[2064]: Session 25 logged out. Waiting for processes to exit. 
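The update_engine trace that starts at 13:55:15 and ends just above is one complete, failed Omaha update check. The request is posted to the literal hostname "disabled" (this host's update server is evidently configured off, hence curl's "Could not resolve host: disabled"), the fetcher retries ("No HTTP response, retry 1" through "retry 3", spaced about ten seconds apart), the transfer is then abandoned, the failure is folded into error code 37 (kActionCodeOmahaErrorInHTTPResponse), the error event itself fails to send for the same reason, and the attempter goes idle with "Next update check in 47m54s". A compact Go sketch of that bounded-retry-then-report shape (URL, counts, and delays are placeholders, not update_engine's real values):

    package main

    import (
    	"errors"
    	"fmt"
    	"net/http"
    	"time"
    )

    // fetchOnce posts one Omaha-style request; against a server named
    // "disabled" this fails at DNS resolution, exactly as in the log.
    func fetchOnce(url string) error {
    	client := &http.Client{Timeout: 10 * time.Second}
    	resp, err := client.Post(url, "text/xml", nil)
    	if err != nil {
    		return err
    	}
    	resp.Body.Close()
    	return nil
    }

    // checkForUpdate mirrors the trace: a few timed retries, then a single
    // terminal error the caller reports and reschedules around.
    func checkForUpdate(url string, retries int, wait time.Duration) error {
    	for i := 1; i <= retries; i++ {
    		if err := fetchOnce(url); err != nil {
    			fmt.Printf("No HTTP response, retry %d: %v\n", i, err)
    			time.Sleep(wait)
    			continue
    		}
    		return nil
    	}
    	return errors.New("Omaha request network transfer failed")
    }

    func main() {
    	if err := checkForUpdate("http://disabled/update/", 3, time.Second); err != nil {
    		fmt.Println("update failed:", err)
    		fmt.Println("rescheduling next update check")
    	}
    }

Note that the failure path ends in rescheduling, not a crash: like the attempter above, the caller simply ignores failures until it gets a valid response.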
Jan 30 13:55:49.503405 systemd-logind[2064]: Removed session 25. Jan 30 13:55:54.524463 systemd[1]: Started sshd@25-172.31.23.102:22-139.178.68.195:57350.service - OpenSSH per-connection server daemon (139.178.68.195:57350). Jan 30 13:55:54.713369 sshd[6740]: Accepted publickey for core from 139.178.68.195 port 57350 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:55:54.716414 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:54.723746 systemd-logind[2064]: New session 26 of user core. Jan 30 13:55:54.729830 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:55:54.956335 sshd[6740]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:54.963392 systemd[1]: sshd@25-172.31.23.102:22-139.178.68.195:57350.service: Deactivated successfully. Jan 30 13:55:54.972655 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:55:54.975809 systemd-logind[2064]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:55:54.978274 systemd-logind[2064]: Removed session 26. Jan 30 13:55:59.996186 systemd[1]: Started sshd@26-172.31.23.102:22-139.178.68.195:37550.service - OpenSSH per-connection server daemon (139.178.68.195:37550). Jan 30 13:56:00.214502 sshd[6780]: Accepted publickey for core from 139.178.68.195 port 37550 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:56:00.216265 sshd[6780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:00.222182 systemd-logind[2064]: New session 27 of user core. Jan 30 13:56:00.227508 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:56:00.432540 sshd[6780]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:00.436119 systemd[1]: sshd@26-172.31.23.102:22-139.178.68.195:37550.service: Deactivated successfully. Jan 30 13:56:00.442273 systemd-logind[2064]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:56:00.444497 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:56:00.447129 systemd-logind[2064]: Removed session 27. Jan 30 13:56:13.884314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb-rootfs.mount: Deactivated successfully. 
Jan 30 13:56:13.956183 containerd[2095]: time="2025-01-30T13:56:13.916207295Z" level=info msg="shim disconnected" id=db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb namespace=k8s.io
Jan 30 13:56:13.973779 containerd[2095]: time="2025-01-30T13:56:13.973716059Z" level=warning msg="cleaning up after shim disconnected" id=db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb namespace=k8s.io
Jan 30 13:56:13.973779 containerd[2095]: time="2025-01-30T13:56:13.973760346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:56:14.131799 containerd[2095]: time="2025-01-30T13:56:14.131751670Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:56:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:56:14.706921 kubelet[3401]: I0130 13:56:14.704778 3401 scope.go:117] "RemoveContainer" containerID="db217d084c4de586edf51b723662a9e33e530180941da95d0a7f9fef89ac48cb"
Jan 30 13:56:14.901707 containerd[2095]: time="2025-01-30T13:56:14.901647505Z" level=info msg="CreateContainer within sandbox \"85d9ee8ad00279a8d9bdf318c5320b158f9fbebef3f46213458d1cbe30c26e97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 13:56:15.047727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16256411.mount: Deactivated successfully.
Jan 30 13:56:15.117104 containerd[2095]: time="2025-01-30T13:56:15.117050541Z" level=info msg="CreateContainer within sandbox \"85d9ee8ad00279a8d9bdf318c5320b158f9fbebef3f46213458d1cbe30c26e97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6fc123b39371120d7c10003b09476e38d6669c31af9066dcad1311a57e585bcf\""
Jan 30 13:56:15.155638 containerd[2095]: time="2025-01-30T13:56:15.155226941Z" level=info msg="StartContainer for \"6fc123b39371120d7c10003b09476e38d6669c31af9066dcad1311a57e585bcf\""
Jan 30 13:56:15.303524 containerd[2095]: time="2025-01-30T13:56:15.303348136Z" level=info msg="StartContainer for \"6fc123b39371120d7c10003b09476e38d6669c31af9066dcad1311a57e585bcf\" returns successfully"
Jan 30 13:56:15.344247 containerd[2095]: time="2025-01-30T13:56:15.344123117Z" level=info msg="shim disconnected" id=a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0 namespace=k8s.io
Jan 30 13:56:15.344247 containerd[2095]: time="2025-01-30T13:56:15.344191577Z" level=warning msg="cleaning up after shim disconnected" id=a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0 namespace=k8s.io
Jan 30 13:56:15.344247 containerd[2095]: time="2025-01-30T13:56:15.344204928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:56:15.694910 kubelet[3401]: I0130 13:56:15.694768 3401 scope.go:117] "RemoveContainer" containerID="a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0"
Jan 30 13:56:15.703092 containerd[2095]: time="2025-01-30T13:56:15.702939476Z" level=info msg="CreateContainer within sandbox \"cfc3ebb70a409a59d6246c0375a29a389bb8bb634ea9be8330182d66f814b4d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 30 13:56:15.729828 containerd[2095]: time="2025-01-30T13:56:15.729429142Z" level=info msg="CreateContainer within sandbox \"cfc3ebb70a409a59d6246c0375a29a389bb8bb634ea9be8330182d66f814b4d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e8f27b7182893facd96cb5fea1207b1d9831308dbf13827ebe091c7ae3be65eb\""
Jan 30 13:56:15.730806 containerd[2095]: time="2025-01-30T13:56:15.730772556Z" level=info msg="StartContainer for \"e8f27b7182893facd96cb5fea1207b1d9831308dbf13827ebe091c7ae3be65eb\""
Jan 30 13:56:15.906626 containerd[2095]: time="2025-01-30T13:56:15.906494267Z" level=info msg="StartContainer for \"e8f27b7182893facd96cb5fea1207b1d9831308dbf13827ebe091c7ae3be65eb\" returns successfully"
Jan 30 13:56:16.044105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5da34c208664ca9eb25b6b816c1633574f3f3955ad56a287731582b09d6aaa0-rootfs.mount: Deactivated successfully.
Jan 30 13:56:19.519501 containerd[2095]: time="2025-01-30T13:56:19.519412319Z" level=info msg="shim disconnected" id=e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0 namespace=k8s.io
Jan 30 13:56:19.519980 containerd[2095]: time="2025-01-30T13:56:19.519515333Z" level=warning msg="cleaning up after shim disconnected" id=e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0 namespace=k8s.io
Jan 30 13:56:19.519980 containerd[2095]: time="2025-01-30T13:56:19.519528538Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:56:19.526019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0-rootfs.mount: Deactivated successfully.
Jan 30 13:56:19.709397 kubelet[3401]: I0130 13:56:19.709360 3401 scope.go:117] "RemoveContainer" containerID="e5e36de23a234332ccd1e8bd67394937b4190b99d3fa4d6720280959113748a0"
Jan 30 13:56:19.712408 containerd[2095]: time="2025-01-30T13:56:19.712371235Z" level=info msg="CreateContainer within sandbox \"aedf1ab759dd558a861105c22b8926d5ddbc337d83535865bbc926874feee768\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 13:56:19.738856 containerd[2095]: time="2025-01-30T13:56:19.738809325Z" level=info msg="CreateContainer within sandbox \"aedf1ab759dd558a861105c22b8926d5ddbc337d83535865bbc926874feee768\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1f3b95170351e400a19c04c6649999047b5c4a709c8e0b14977b23a70331cb2a\""
Jan 30 13:56:19.739822 containerd[2095]: time="2025-01-30T13:56:19.739778485Z" level=info msg="StartContainer for \"1f3b95170351e400a19c04c6649999047b5c4a709c8e0b14977b23a70331cb2a\""
Jan 30 13:56:19.849670 containerd[2095]: time="2025-01-30T13:56:19.848911202Z" level=info msg="StartContainer for \"1f3b95170351e400a19c04c6649999047b5c4a709c8e0b14977b23a70331cb2a\" returns successfully"
Jan 30 13:56:23.578847 kubelet[3401]: E0130 13:56:23.578728 3401 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-102)"