Feb 13 20:08:11.070551 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:08:11.070725 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:08:11.070743 kernel: BIOS-provided physical RAM map: Feb 13 20:08:11.070755 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:08:11.070767 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:08:11.070779 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:08:11.070798 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 13 20:08:11.070811 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 13 20:08:11.070824 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 13 20:08:11.070836 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:08:11.070849 kernel: NX (Execute Disable) protection: active Feb 13 20:08:11.070862 kernel: APIC: Static calls initialized Feb 13 20:08:11.070874 kernel: SMBIOS 2.7 present. Feb 13 20:08:11.070888 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 13 20:08:11.070907 kernel: Hypervisor detected: KVM Feb 13 20:08:11.070922 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:08:11.070936 kernel: kvm-clock: using sched offset of 6096101728 cycles Feb 13 20:08:11.070951 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:08:11.070966 kernel: tsc: Detected 2499.996 MHz processor Feb 13 20:08:11.070980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:08:11.070995 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:08:11.071012 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 13 20:08:11.071115 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:08:11.071132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:08:11.071232 kernel: Using GB pages for direct mapping Feb 13 20:08:11.071250 kernel: ACPI: Early table checksum verification disabled Feb 13 20:08:11.071265 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 13 20:08:11.071276 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 13 20:08:11.071288 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 20:08:11.071300 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 13 20:08:11.071317 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 13 20:08:11.071329 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 20:08:11.071342 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 20:08:11.071354 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 13 20:08:11.071364 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 20:08:11.071381 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 13 20:08:11.071398 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 13 20:08:11.071415 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 20:08:11.071430 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 13 20:08:11.071453 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 13 20:08:11.071473 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 13 20:08:11.071486 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 13 20:08:11.071501 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 13 20:08:11.071516 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 13 20:08:11.071534 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 13 20:08:11.071549 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 13 20:08:11.071564 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 13 20:08:11.071579 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 13 20:08:11.071594 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:08:11.071609 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:08:11.071624 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 13 20:08:11.071639 kernel: NUMA: Initialized distance table, cnt=1 Feb 13 20:08:11.071653 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 13 20:08:11.071671 kernel: Zone ranges: Feb 13 20:08:11.071687 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:08:11.071702 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 13 20:08:11.071717 kernel: Normal empty Feb 13 20:08:11.071731 kernel: Movable zone start for each node Feb 13 20:08:11.071746 kernel: Early memory node ranges Feb 13 20:08:11.071761 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:08:11.071776 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 13 20:08:11.071791 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 13 20:08:11.071809 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:08:11.071824 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:08:11.071838 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 13 20:08:11.071853 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 20:08:11.071868 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:08:11.071883 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 13 20:08:11.071898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:08:11.071913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:08:11.071928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:08:11.071943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:08:11.071961 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:08:11.071976 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:08:11.072448 kernel: TSC deadline timer available Feb 13 20:08:11.072471 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:08:11.072487 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:08:11.072561 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 13 20:08:11.072577 kernel: Booting paravirtualized kernel on KVM Feb 13 20:08:11.072593 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:08:11.072609 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:08:11.072630 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:08:11.072646 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:08:11.072660 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:08:11.072675 kernel: kvm-guest: PV spinlocks enabled Feb 13 20:08:11.072690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:08:11.072707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:08:11.072723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:08:11.072738 kernel: random: crng init done Feb 13 20:08:11.072756 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:08:11.072771 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:08:11.072786 kernel: Fallback order for Node 0: 0 Feb 13 20:08:11.072801 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 13 20:08:11.072816 kernel: Policy zone: DMA32 Feb 13 20:08:11.072831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:08:11.072846 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125152K reserved, 0K cma-reserved) Feb 13 20:08:11.072862 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:08:11.072879 kernel: Kernel/User page tables isolation: enabled Feb 13 20:08:11.072895 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:08:11.072910 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:08:11.072925 kernel: Dynamic Preempt: voluntary Feb 13 20:08:11.072939 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:08:11.072955 kernel: rcu: RCU event tracing is enabled. Feb 13 20:08:11.072971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:08:11.072986 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:08:11.073001 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:08:11.073016 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:08:11.073034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:08:11.073049 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:08:11.073064 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:08:11.073079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 20:08:11.073094 kernel: Console: colour VGA+ 80x25 Feb 13 20:08:11.073109 kernel: printk: console [ttyS0] enabled Feb 13 20:08:11.073124 kernel: ACPI: Core revision 20230628 Feb 13 20:08:11.073140 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 13 20:08:11.073155 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:08:11.073184 kernel: x2apic enabled Feb 13 20:08:11.073200 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:08:11.073226 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 13 20:08:11.073246 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Feb 13 20:08:11.073262 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:08:11.073278 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:08:11.073294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:08:11.073309 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:08:11.073325 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:08:11.073340 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:08:11.073356 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 13 20:08:11.073372 kernel: RETBleed: Vulnerable Feb 13 20:08:11.073391 kernel: Speculative Store Bypass: Vulnerable Feb 13 20:08:11.073407 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:08:11.073477 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:08:11.073496 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 20:08:11.073513 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:08:11.073529 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:08:11.073545 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:08:11.073565 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 20:08:11.073581 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 20:08:11.073597 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 20:08:11.073613 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 20:08:11.073629 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 20:08:11.073645 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 13 20:08:11.073661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:08:11.073677 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 20:08:11.073693 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 20:08:11.073708 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 13 20:08:11.073724 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 13 20:08:11.073743 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 13 20:08:11.073759 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 13 20:08:11.073774 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Feb 13 20:08:11.073790 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:08:11.073806 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:08:11.073921 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:08:11.073939 kernel: landlock: Up and running. Feb 13 20:08:11.073956 kernel: SELinux: Initializing. Feb 13 20:08:11.073972 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:08:11.073988 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:08:11.074004 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 20:08:11.074025 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:08:11.074041 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:08:11.074058 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:08:11.074075 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 20:08:11.074091 kernel: signal: max sigframe size: 3632 Feb 13 20:08:11.074107 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:08:11.074123 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:08:11.074139 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:08:11.074155 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:08:11.074186 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:08:11.074202 kernel: .... node #0, CPUs: #1 Feb 13 20:08:11.074219 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 20:08:11.074237 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 20:08:11.074344 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:08:11.074364 kernel: smpboot: Max logical packages: 1 Feb 13 20:08:11.074380 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Feb 13 20:08:11.074396 kernel: devtmpfs: initialized Feb 13 20:08:11.074416 kernel: x86/mm: Memory block size: 128MB Feb 13 20:08:11.074433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:08:11.074450 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:08:11.074466 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:08:11.074482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:08:11.074498 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:08:11.074514 kernel: audit: type=2000 audit(1739477290.021:1): state=initialized audit_enabled=0 res=1 Feb 13 20:08:11.074530 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:08:11.074546 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:08:11.074565 kernel: cpuidle: using governor menu Feb 13 20:08:11.074581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:08:11.074597 kernel: dca service started, version 1.12.1 Feb 13 20:08:11.074614 kernel: PCI: Using configuration type 1 for base access Feb 13 20:08:11.074630 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:08:11.074646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:08:11.074662 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:08:11.074678 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:08:11.074694 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:08:11.074713 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:08:11.074728 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:08:11.074744 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:08:11.074760 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:08:11.074777 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 20:08:11.074844 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:08:11.074861 kernel: ACPI: Interpreter enabled Feb 13 20:08:11.074877 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:08:11.074893 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:08:11.074910 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:08:11.075029 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:08:11.075050 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 20:08:11.075067 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:08:11.075416 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:08:11.075573 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:08:11.075710 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:08:11.075729 kernel: acpiphp: Slot [3] registered Feb 13 20:08:11.075750 kernel: acpiphp: Slot [4] registered Feb 13 20:08:11.075824 kernel: acpiphp: Slot [5] registered Feb 13 20:08:11.075846 kernel: acpiphp: Slot [6] registered Feb 13 20:08:11.075863 kernel: acpiphp: Slot [7] registered Feb 13 20:08:11.075879 kernel: acpiphp: Slot [8] registered Feb 13 20:08:11.075894 kernel: acpiphp: Slot [9] registered Feb 13 20:08:11.075911 kernel: acpiphp: Slot [10] registered Feb 13 20:08:11.075927 kernel: acpiphp: Slot [11] registered Feb 13 20:08:11.075943 kernel: acpiphp: Slot [12] registered Feb 13 20:08:11.075963 kernel: acpiphp: Slot [13] registered Feb 13 20:08:11.075979 kernel: acpiphp: Slot [14] registered Feb 13 20:08:11.075995 kernel: acpiphp: Slot [15] registered Feb 13 20:08:11.076011 kernel: acpiphp: Slot [16] registered Feb 13 20:08:11.076027 kernel: acpiphp: Slot [17] registered Feb 13 20:08:11.076043 kernel: acpiphp: Slot [18] registered Feb 13 20:08:11.076146 kernel: acpiphp: Slot [19] registered Feb 13 20:08:11.076179 kernel: acpiphp: Slot [20] registered Feb 13 20:08:11.076196 kernel: acpiphp: Slot [21] registered Feb 13 20:08:11.076212 kernel: acpiphp: Slot [22] registered Feb 13 20:08:11.076232 kernel: acpiphp: Slot [23] registered Feb 13 20:08:11.076249 kernel: acpiphp: Slot [24] registered Feb 13 20:08:11.076265 kernel: acpiphp: Slot [25] registered Feb 13 20:08:11.076281 kernel: acpiphp: Slot [26] registered Feb 13 20:08:11.076297 kernel: acpiphp: Slot [27] registered Feb 13 20:08:11.076313 kernel: acpiphp: Slot [28] registered Feb 13 20:08:11.076328 kernel: acpiphp: Slot [29] registered Feb 13 20:08:11.076344 kernel: acpiphp: Slot [30] registered Feb 13 20:08:11.076360 kernel: acpiphp: Slot [31] registered Feb 13 20:08:11.076379 kernel: PCI host bridge to bus 0000:00 
Feb 13 20:08:11.076937 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:08:11.078474 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:08:11.078625 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:08:11.078833 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 20:08:11.079077 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:08:11.079384 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:08:11.079705 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 20:08:11.080675 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 13 20:08:11.080901 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 20:08:11.081034 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 13 20:08:11.081187 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 13 20:08:11.081588 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 13 20:08:11.081740 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 13 20:08:11.081894 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 13 20:08:11.082028 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 13 20:08:11.082316 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 13 20:08:11.082505 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 15625 usecs Feb 13 20:08:11.082886 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 13 20:08:11.083039 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 13 20:08:11.083373 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 20:08:11.083570 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:08:11.083853 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 20:08:11.083984 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 13 20:08:11.084116 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 20:08:11.084396 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 13 20:08:11.084420 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:08:11.084442 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:08:11.084457 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:08:11.084472 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:08:11.084487 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:08:11.084567 kernel: iommu: Default domain type: Translated Feb 13 20:08:11.084584 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:08:11.084599 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:08:11.084615 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:08:11.084630 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:08:11.084649 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 13 20:08:11.084781 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 13 20:08:11.084948 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 13 20:08:11.085089 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:08:11.085111 kernel: vgaarb: loaded Feb 13 20:08:11.085128 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 20:08:11.085145 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Feb 13 20:08:11.085180 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:08:11.085196 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:08:11.085219 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:08:11.085236 kernel: pnp: PnP ACPI init Feb 13 20:08:11.085253 kernel: pnp: PnP ACPI: found 5 devices Feb 13 20:08:11.085374 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:08:11.085394 kernel: NET: Registered PF_INET protocol family Feb 13 20:08:11.085412 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:08:11.085494 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:08:11.085513 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:08:11.085535 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:08:11.085552 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:08:11.085570 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:08:11.085588 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:08:11.085604 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:08:11.085621 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:08:11.085638 kernel: NET: Registered PF_XDP protocol family Feb 13 20:08:11.085800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:08:11.085948 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:08:11.086091 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:08:11.086328 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 20:08:11.086509 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:08:11.086533 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:08:11.086550 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:08:11.086567 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 13 20:08:11.086584 kernel: clocksource: Switched to clocksource tsc Feb 13 20:08:11.086600 kernel: Initialise system trusted keyrings Feb 13 20:08:11.086621 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:08:11.086637 kernel: Key type asymmetric registered Feb 13 20:08:11.086653 kernel: Asymmetric key parser 'x509' registered Feb 13 20:08:11.086669 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:08:11.086685 kernel: io scheduler mq-deadline registered Feb 13 20:08:11.086701 kernel: io scheduler kyber registered Feb 13 20:08:11.086765 kernel: io scheduler bfq registered Feb 13 20:08:11.086783 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:08:11.086799 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:08:11.086818 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:08:11.086835 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:08:11.086851 kernel: i8042: Warning: Keylock active Feb 13 20:08:11.086866 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:08:11.086882 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:08:11.087105 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Feb 13 20:08:11.087322 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 20:08:11.087449 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:08:10 UTC (1739477290) Feb 13 20:08:11.087672 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 20:08:11.087696 kernel: intel_pstate: CPU model not supported Feb 13 20:08:11.087713 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:08:11.087989 kernel: Segment Routing with IPv6 Feb 13 20:08:11.088008 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:08:11.088025 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:08:11.088042 kernel: Key type dns_resolver registered Feb 13 20:08:11.088223 kernel: IPI shorthand broadcast: enabled Feb 13 20:08:11.088242 kernel: sched_clock: Marking stable (705002785, 349452336)->(1152228574, -97773453) Feb 13 20:08:11.088265 kernel: registered taskstats version 1 Feb 13 20:08:11.088281 kernel: Loading compiled-in X.509 certificates Feb 13 20:08:11.088297 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:08:11.088313 kernel: Key type .fscrypt registered Feb 13 20:08:11.088378 kernel: Key type fscrypt-provisioning registered Feb 13 20:08:11.088395 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:08:11.088412 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:08:11.088428 kernel: ima: No architecture policies found Feb 13 20:08:11.088448 kernel: clk: Disabling unused clocks Feb 13 20:08:11.088465 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:08:11.088481 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:08:11.088555 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:08:11.088574 kernel: Run /init as init process Feb 13 20:08:11.088592 kernel: with arguments: Feb 13 20:08:11.088608 kernel: /init Feb 13 20:08:11.088624 kernel: with environment: Feb 13 20:08:11.088639 kernel: HOME=/ Feb 13 20:08:11.088655 kernel: TERM=linux Feb 13 20:08:11.088674 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:08:11.088719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:08:11.088739 systemd[1]: Detected virtualization amazon. Feb 13 20:08:11.088757 systemd[1]: Detected architecture x86-64. Feb 13 20:08:11.088774 systemd[1]: Running in initrd. Feb 13 20:08:11.088791 systemd[1]: No hostname configured, using default hostname. Feb 13 20:08:11.088807 systemd[1]: Hostname set to <localhost>. Feb 13 20:08:11.088828 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:08:11.088845 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:08:11.088863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:08:11.088880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:08:11.088900 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:08:11.088917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 20:08:11.088935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:08:11.089141 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:08:11.089177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:08:11.089195 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:08:11.089214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:08:11.089231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:08:11.089249 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:08:11.089266 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:08:11.089289 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:08:11.089307 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:08:11.089637 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:08:11.089657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:08:11.089676 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:08:11.089693 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:08:11.089711 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:08:11.089729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:08:11.089746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:08:11.089766 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:08:11.089781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:08:11.089797 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:08:11.089815 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:08:11.089833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:08:11.089864 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:08:11.089881 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:08:11.089947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:08:11.089967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:08:11.089984 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:08:11.090002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:08:11.090022 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:08:11.090046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:08:11.090099 systemd-journald[178]: Collecting audit messages is disabled. Feb 13 20:08:11.090141 systemd-journald[178]: Journal started Feb 13 20:08:11.090248 systemd-journald[178]: Runtime Journal (/run/log/journal/ec294bdaf3000cdfafb56d6119e9dc16) is 4.8M, max 38.6M, 33.7M free. Feb 13 20:08:11.099421 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 20:08:11.094219 systemd-modules-load[179]: Inserted module 'overlay' Feb 13 20:08:11.107260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:08:11.123564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:08:11.272114 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:08:11.272151 kernel: Bridge firewalling registered Feb 13 20:08:11.162525 systemd-modules-load[179]: Inserted module 'br_netfilter' Feb 13 20:08:11.285643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:08:11.288838 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:08:11.290572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:08:11.312107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:08:11.330661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:08:11.333244 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:08:11.334802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:08:11.342391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:08:11.347407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:08:11.365700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:08:11.385708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:08:11.408251 dracut-cmdline[214]: dracut-dracut-053 Feb 13 20:08:11.413959 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:08:11.420501 systemd-resolved[204]: Positive Trust Anchors: Feb 13 20:08:11.420516 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:08:11.420568 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:08:11.427632 systemd-resolved[204]: Defaulting to hostname 'linux'. Feb 13 20:08:11.432323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:08:11.450918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 20:08:11.541195 kernel: SCSI subsystem initialized Feb 13 20:08:11.553199 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:08:11.568194 kernel: iscsi: registered transport (tcp) Feb 13 20:08:11.590460 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:08:11.590546 kernel: QLogic iSCSI HBA Driver Feb 13 20:08:11.632568 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:08:11.638437 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:08:11.680750 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:08:11.680835 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:08:11.680857 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:08:11.755364 kernel: raid6: avx512x4 gen() 8453 MB/s Feb 13 20:08:11.772219 kernel: raid6: avx512x2 gen() 9713 MB/s Feb 13 20:08:11.789219 kernel: raid6: avx512x1 gen() 13422 MB/s Feb 13 20:08:11.812494 kernel: raid6: avx2x4 gen() 7593 MB/s Feb 13 20:08:11.829220 kernel: raid6: avx2x2 gen() 7305 MB/s Feb 13 20:08:11.847683 kernel: raid6: avx2x1 gen() 8087 MB/s Feb 13 20:08:11.847775 kernel: raid6: using algorithm avx512x1 gen() 13422 MB/s Feb 13 20:08:11.869248 kernel: raid6: .... xor() 8576 MB/s, rmw enabled Feb 13 20:08:11.869331 kernel: raid6: using avx512x2 recovery algorithm Feb 13 20:08:11.921434 kernel: xor: automatically using best checksumming function avx Feb 13 20:08:12.207191 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:08:12.223600 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:08:12.231671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:08:12.260070 systemd-udevd[396]: Using default interface naming scheme 'v255'. Feb 13 20:08:12.265893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:08:12.280011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:08:12.304825 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Feb 13 20:08:12.345110 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:08:12.350553 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:08:12.426316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:08:12.434468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:08:12.466538 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:08:12.469636 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:08:12.472701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:08:12.474446 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:08:12.489615 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:08:12.523207 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:08:12.567359 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 20:08:12.585139 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 20:08:12.585348 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Feb 13 20:08:12.585503 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ab:86:05:59:6d Feb 13 20:08:12.585652 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:08:12.584594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:08:12.584767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:08:12.586348 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:08:12.587645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:08:12.587842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:08:12.590005 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:08:12.597242 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:08:12.607340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:08:12.644587 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 20:08:12.646294 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 20:08:12.653192 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:08:12.653256 kernel: AES CTR mode by8 optimization enabled Feb 13 20:08:12.663191 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 20:08:12.669229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:08:12.669293 kernel: GPT:9289727 != 16777215 Feb 13 20:08:12.669313 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:08:12.669331 kernel: GPT:9289727 != 16777215 Feb 13 20:08:12.669347 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:08:12.669366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:08:12.792762 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 20:08:12.811152 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (449) Feb 13 20:08:12.816226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:08:12.822234 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (451) Feb 13 20:08:12.829399 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:08:12.866288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:08:12.895978 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 20:08:12.943906 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 20:08:12.947787 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 20:08:12.959249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 20:08:12.966522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:08:12.988438 disk-uuid[631]: Primary Header is updated. Feb 13 20:08:12.988438 disk-uuid[631]: Secondary Entries is updated. Feb 13 20:08:12.988438 disk-uuid[631]: Secondary Header is updated. Feb 13 20:08:12.993189 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:08:13.002317 kernel: GPT:disk_guids don't match. 
Feb 13 20:08:13.002372 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:08:13.002385 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:08:13.009190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:08:14.009242 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:08:14.010995 disk-uuid[632]: The operation has completed successfully. Feb 13 20:08:14.185304 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:08:14.185433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:08:14.221524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:08:14.239174 sh[975]: Success Feb 13 20:08:14.258191 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:08:14.410113 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:08:14.440345 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:08:14.443312 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:08:14.494935 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:08:14.495001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:08:14.495021 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:08:14.498775 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:08:14.498844 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:08:14.543187 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:08:14.545140 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:08:14.547339 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:08:14.564426 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:08:14.570780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:08:14.585495 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:08:14.585555 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:08:14.585568 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:08:14.590408 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:08:14.599738 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:08:14.599468 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:08:14.621129 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:08:14.633399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:08:14.676339 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:08:14.683777 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:08:14.753654 systemd-networkd[1167]: lo: Link UP Feb 13 20:08:14.753667 systemd-networkd[1167]: lo: Gained carrier Feb 13 20:08:14.758079 systemd-networkd[1167]: Enumeration completed Feb 13 20:08:14.758274 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 20:08:14.758915 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:08:14.759059 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:08:14.765803 systemd[1]: Reached target network.target - Network. Feb 13 20:08:14.772935 systemd-networkd[1167]: eth0: Link UP Feb 13 20:08:14.772945 systemd-networkd[1167]: eth0: Gained carrier Feb 13 20:08:14.772962 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:08:14.791460 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.16.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 20:08:14.974977 ignition[1113]: Ignition 2.19.0 Feb 13 20:08:14.975082 ignition[1113]: Stage: fetch-offline Feb 13 20:08:14.975390 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:14.977345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:08:14.975402 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:14.975981 ignition[1113]: Ignition finished successfully Feb 13 20:08:14.985625 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:08:15.014922 ignition[1176]: Ignition 2.19.0 Feb 13 20:08:15.014935 ignition[1176]: Stage: fetch Feb 13 20:08:15.015512 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:15.015528 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:15.015645 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:15.028289 ignition[1176]: PUT result: OK Feb 13 20:08:15.039998 ignition[1176]: parsed url from cmdline: "" Feb 13 20:08:15.040011 ignition[1176]: no config URL provided Feb 13 20:08:15.040025 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:08:15.040137 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:08:15.040186 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:15.043243 ignition[1176]: PUT result: OK Feb 13 20:08:15.043303 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 20:08:15.048576 ignition[1176]: GET result: OK Feb 13 20:08:15.048676 ignition[1176]: parsing config with SHA512: 9ce9e573b6eb2e366e250d41751500ee40be1c854a0f270149220192dd4c043a0a46ba496256fb4428ca6e161d83c026d14b7f46131b747538e2a211b1839c81 Feb 13 20:08:15.060695 unknown[1176]: fetched base config from "system" Feb 13 20:08:15.060711 unknown[1176]: fetched base config from "system" Feb 13 20:08:15.062037 ignition[1176]: fetch: fetch complete Feb 13 20:08:15.060718 unknown[1176]: fetched user config from "aws" Feb 13 20:08:15.062044 ignition[1176]: fetch: fetch passed Feb 13 20:08:15.067456 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:08:15.062108 ignition[1176]: Ignition finished successfully Feb 13 20:08:15.081570 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 20:08:15.100852 ignition[1182]: Ignition 2.19.0 Feb 13 20:08:15.100867 ignition[1182]: Stage: kargs Feb 13 20:08:15.101381 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:15.101394 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:15.101613 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:15.102968 ignition[1182]: PUT result: OK Feb 13 20:08:15.110366 ignition[1182]: kargs: kargs passed Feb 13 20:08:15.110452 ignition[1182]: Ignition finished successfully Feb 13 20:08:15.113603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:08:15.122469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:08:15.162973 ignition[1188]: Ignition 2.19.0 Feb 13 20:08:15.162987 ignition[1188]: Stage: disks Feb 13 20:08:15.163614 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:15.163628 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:15.163899 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:15.168673 ignition[1188]: PUT result: OK Feb 13 20:08:15.179361 ignition[1188]: disks: disks passed Feb 13 20:08:15.179578 ignition[1188]: Ignition finished successfully Feb 13 20:08:15.182456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:08:15.183179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:08:15.187845 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:08:15.192017 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:08:15.192116 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:08:15.197437 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:08:15.206647 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:08:15.251326 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:08:15.261080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:08:15.269309 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:08:15.450201 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:08:15.451438 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:08:15.454277 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:08:15.480000 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:08:15.484798 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:08:15.488506 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:08:15.492752 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:08:15.497350 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 20:08:15.510187 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1215) Feb 13 20:08:15.510261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:08:15.510281 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:08:15.511997 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:08:15.515265 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:08:15.521359 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:08:15.527017 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:08:15.530296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:08:15.909209 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:08:15.919071 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:08:15.933133 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:08:15.958189 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:08:16.261992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:08:16.268360 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:08:16.279543 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:08:16.298901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:08:16.301291 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:08:16.354718 systemd-networkd[1167]: eth0: Gained IPv6LL Feb 13 20:08:16.355616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:08:16.361724 ignition[1333]: INFO : Ignition 2.19.0 Feb 13 20:08:16.361724 ignition[1333]: INFO : Stage: mount Feb 13 20:08:16.365023 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:16.365023 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:16.365023 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:16.370767 ignition[1333]: INFO : PUT result: OK Feb 13 20:08:16.373915 ignition[1333]: INFO : mount: mount passed Feb 13 20:08:16.373915 ignition[1333]: INFO : Ignition finished successfully Feb 13 20:08:16.376953 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:08:16.383311 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:08:16.410705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:08:16.424233 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1345) Feb 13 20:08:16.426582 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:08:16.426644 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:08:16.426663 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:08:16.431189 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:08:16.432891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:08:16.459364 ignition[1362]: INFO : Ignition 2.19.0 Feb 13 20:08:16.460471 ignition[1362]: INFO : Stage: files Feb 13 20:08:16.460471 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:16.460471 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:16.460471 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:16.465975 ignition[1362]: INFO : PUT result: OK Feb 13 20:08:16.468335 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:08:16.470857 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:08:16.470857 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:08:16.475943 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:08:16.478235 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:08:16.480819 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:08:16.478388 unknown[1362]: wrote ssh authorized keys file for user: core Feb 13 20:08:16.484277 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:08:16.484277 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:08:16.484277 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:08:16.484277 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:08:16.597991 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 20:08:16.792366 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:08:16.796502 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:08:16.798529 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:08:16.798529 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:08:16.805245 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:08:16.805245 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:08:16.805245 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:08:16.805245 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:08:16.805245 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 
20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:08:16.820869 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:08:17.282124 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 20:08:17.719987 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:08:17.719987 ignition[1362]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 13 20:08:17.726543 ignition[1362]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:08:17.730569 ignition[1362]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:08:17.730569 ignition[1362]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 13 20:08:17.730569 ignition[1362]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Feb 13 20:08:17.737741 ignition[1362]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:08:17.737741 ignition[1362]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:08:17.737741 ignition[1362]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Feb 13 20:08:17.737741 ignition[1362]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:08:17.747543 ignition[1362]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:08:17.747543 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:08:17.751925 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:08:17.751925 ignition[1362]: INFO : files: files passed Feb 13 20:08:17.751925 ignition[1362]: INFO : Ignition finished successfully Feb 13 20:08:17.761769 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:08:17.771423 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:08:17.780764 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:08:17.790791 systemd[1]: ignition-quench.service: Deactivated successfully. 
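The files stage above writes plain files, links /etc/extensions/kubernetes.raw to the downloaded sysext image, drops 10-use-cgroupfs.conf into containerd.service.d, and enables prepare-helm.service, all driven by the provisioned Ignition config rather than by anything already on the host. As a rough sketch of the shape such a config fragment takes, assembled here in Python purely for illustration (the spec version string and key names follow the Ignition v3 schema as assumed, and the drop-in contents are placeholders, not the real ones):

    import json

    # Rough shape of an Ignition (spec 3.x) fragment that could produce the
    # containerd drop-in, the prepare-helm preset, and the sysext link seen
    # above. Schema details here are assumptions for illustration only.
    config = {
        "ignition": {"version": "3.4.0"},
        "systemd": {
            "units": [
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# cgroup driver override would go here\n",
                        }
                    ],
                },
                {"name": "prepare-helm.service", "enabled": True},
            ]
        },
        "storage": {
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))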
Feb 13 20:08:17.791095 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:08:17.803528 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:08:17.803528 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:08:17.807599 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:08:17.811643 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:08:17.811977 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:08:17.822349 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:08:17.853359 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:08:17.853515 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:08:17.856343 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:08:17.860130 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:08:17.861465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:08:17.866426 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:08:17.892955 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:08:17.904333 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:08:17.916097 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:08:17.917893 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:08:17.918031 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:08:17.918218 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:08:17.918331 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:08:17.918848 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:08:17.919200 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:08:17.919399 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:08:17.919660 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:08:17.920104 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:08:17.937685 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:08:17.937854 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:08:17.944072 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:08:17.945797 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:08:17.947552 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:08:17.949949 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:08:17.950195 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:08:17.950559 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:08:17.950717 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:08:17.951053 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 20:08:17.953833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:08:17.960425 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:08:17.960553 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:08:17.969930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:08:17.970102 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:08:17.974818 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:08:17.975957 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:08:17.983469 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:08:17.984529 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:08:17.984663 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:08:17.991464 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:08:17.992980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:08:17.993129 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:08:17.996770 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:08:17.996879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:08:18.042475 ignition[1415]: INFO : Ignition 2.19.0 Feb 13 20:08:18.042475 ignition[1415]: INFO : Stage: umount Feb 13 20:08:18.042475 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:08:18.042475 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:08:18.042475 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:08:18.042226 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:08:18.055716 ignition[1415]: INFO : PUT result: OK Feb 13 20:08:18.042361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:08:18.058958 ignition[1415]: INFO : umount: umount passed Feb 13 20:08:18.058958 ignition[1415]: INFO : Ignition finished successfully Feb 13 20:08:18.060449 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:08:18.060586 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:08:18.064577 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:08:18.064681 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:08:18.068213 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:08:18.068388 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:08:18.072368 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:08:18.072428 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:08:18.073948 systemd[1]: Stopped target network.target - Network. Feb 13 20:08:18.076093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:08:18.076272 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:08:18.081035 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:08:18.082226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:08:18.086419 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 20:08:18.089655 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:08:18.098567 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:08:18.102218 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:08:18.103108 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:08:18.104656 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:08:18.104703 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:08:18.107923 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:08:18.108108 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:08:18.119862 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:08:18.121068 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:08:18.124649 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:08:18.136811 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:08:18.140772 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:08:18.143369 systemd-networkd[1167]: eth0: DHCPv6 lease lost Feb 13 20:08:18.151977 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:08:18.152158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:08:18.161665 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:08:18.161795 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:08:18.168105 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:08:18.168196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:08:18.178306 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:08:18.179722 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:08:18.179832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:08:18.181700 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:08:18.181772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:08:18.183238 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:08:18.183785 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:08:18.187348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:08:18.187573 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:08:18.190914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:08:18.204299 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:08:18.204427 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:08:18.212703 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:08:18.212907 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:08:18.216810 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:08:18.216999 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:08:18.228846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:08:18.228942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:08:18.230608 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 20:08:18.230644 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:08:18.240331 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:08:18.240404 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:08:18.246592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:08:18.246668 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:08:18.249062 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:08:18.249121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:08:18.271457 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:08:18.276726 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:08:18.276812 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:08:18.283754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:08:18.283831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:08:18.285985 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:08:18.286128 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:08:18.292126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:08:18.293804 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:08:18.297938 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:08:18.308404 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:08:18.340701 systemd[1]: Switching root. Feb 13 20:08:18.374730 systemd-journald[178]: Journal stopped Feb 13 20:08:21.224133 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Feb 13 20:08:21.224242 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:08:21.224268 kernel: SELinux: policy capability open_perms=1 Feb 13 20:08:21.224292 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:08:21.224311 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:08:21.224329 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:08:21.224348 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:08:21.224367 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:08:21.224386 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:08:21.224412 kernel: audit: type=1403 audit(1739477299.502:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:08:21.224438 systemd[1]: Successfully loaded SELinux policy in 70.551ms. Feb 13 20:08:21.224474 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.410ms. Feb 13 20:08:21.224499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:08:21.224521 systemd[1]: Detected virtualization amazon. Feb 13 20:08:21.224543 systemd[1]: Detected architecture x86-64. Feb 13 20:08:21.224565 systemd[1]: Detected first boot. Feb 13 20:08:21.224585 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:08:21.224605 zram_generator::config[1475]: No configuration found. 
Feb 13 20:08:21.224633 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:08:21.224654 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:08:21.224678 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 20:08:21.224700 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:08:21.224720 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:08:21.224740 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:08:21.224771 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:08:21.224793 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:08:21.224813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:08:21.224835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:08:21.224856 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:08:21.224876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:08:21.224896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:08:21.224917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:08:21.224937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:08:21.224957 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:08:21.224977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:08:21.225000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:08:21.225022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:08:21.225042 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:08:21.225062 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:08:21.225084 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:08:21.225104 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:08:21.225124 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:08:21.225227 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:08:21.225258 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:08:21.225276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:08:21.225294 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:08:21.225312 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:08:21.225329 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:08:21.225347 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:08:21.225367 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:08:21.225390 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:08:21.225410 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Feb 13 20:08:21.225428 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:08:21.225450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:21.225469 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:08:21.225486 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:08:21.225504 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:08:21.225522 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:08:21.225542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:08:21.225560 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:08:21.225578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:08:21.225599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:08:21.225616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:08:21.225635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:08:21.225652 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:08:21.225670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:08:21.225689 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:08:21.225707 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:08:21.225727 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:08:21.225748 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:08:21.225766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:08:21.225784 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:08:21.225802 kernel: loop: module loaded Feb 13 20:08:21.225820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:08:21.225838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:08:21.225855 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:21.225874 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:08:21.225905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:08:21.225926 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:08:21.225944 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:08:21.225962 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:08:21.225980 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:08:21.225998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:08:21.226016 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:08:21.226034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Feb 13 20:08:21.226051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:08:21.226069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:08:21.226093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:08:21.226112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:08:21.226129 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:08:21.226147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:08:21.226178 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:08:21.226232 systemd-journald[1566]: Collecting audit messages is disabled. Feb 13 20:08:21.226272 systemd-journald[1566]: Journal started Feb 13 20:08:21.226307 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec294bdaf3000cdfafb56d6119e9dc16) is 4.8M, max 38.6M, 33.7M free. Feb 13 20:08:21.229248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:08:21.229304 kernel: ACPI: bus type drm_connector registered Feb 13 20:08:21.245240 kernel: fuse: init (API version 7.39) Feb 13 20:08:21.252194 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:08:21.272720 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:08:21.289092 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:08:21.294537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:08:21.297028 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:08:21.297837 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:08:21.302839 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:08:21.305902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:08:21.311017 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:08:21.314941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:08:21.345925 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:08:21.356847 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Feb 13 20:08:21.357660 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Feb 13 20:08:21.362298 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:08:21.363679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:08:21.375044 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:08:21.395538 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:08:21.397347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:08:21.408469 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:08:21.414536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:08:21.421995 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec294bdaf3000cdfafb56d6119e9dc16 is 72.910ms for 950 entries. 
Feb 13 20:08:21.421995 systemd-journald[1566]: System Journal (/var/log/journal/ec294bdaf3000cdfafb56d6119e9dc16) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:08:21.510847 systemd-journald[1566]: Received client request to flush runtime journal. Feb 13 20:08:21.426953 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:08:21.429860 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:08:21.431593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:08:21.450390 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:08:21.455441 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:08:21.458926 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:08:21.469619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:08:21.482378 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:08:21.515803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:08:21.531801 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:08:21.536789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:08:21.559803 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:08:21.573529 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:08:21.603051 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Feb 13 20:08:21.603713 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Feb 13 20:08:21.611022 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:08:22.341358 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:08:22.355430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:08:22.403755 systemd-udevd[1653]: Using default interface naming scheme 'v255'. Feb 13 20:08:22.468578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:08:22.495599 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:08:22.542508 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:08:22.617194 (udev-worker)[1665]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:08:22.669069 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 20:08:22.702545 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:08:22.773204 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 20:08:22.817528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Feb 13 20:08:22.823199 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:08:22.886221 systemd-networkd[1657]: lo: Link UP Feb 13 20:08:22.886234 systemd-networkd[1657]: lo: Gained carrier Feb 13 20:08:22.888888 systemd-networkd[1657]: Enumeration completed Feb 13 20:08:22.889129 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 20:08:22.889553 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:08:22.889558 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:08:22.916071 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:08:22.916142 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Feb 13 20:08:22.903609 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:08:22.918532 systemd-networkd[1657]: eth0: Link UP Feb 13 20:08:22.918809 systemd-networkd[1657]: eth0: Gained carrier Feb 13 20:08:22.918861 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:08:22.923183 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:08:22.923743 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:08:22.933260 systemd-networkd[1657]: eth0: DHCPv4 address 172.31.16.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 20:08:22.998216 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:08:23.017187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1663) Feb 13 20:08:23.029404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:08:23.262968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 20:08:23.378522 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:08:23.391550 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:08:23.393356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:08:23.421395 lvm[1776]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:08:23.453112 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:08:23.456650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:08:23.467446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:08:23.475502 lvm[1780]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:08:23.507746 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:08:23.510801 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:08:23.512716 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:08:23.512751 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:08:23.514803 systemd[1]: Reached target machines.target - Containers. Feb 13 20:08:23.520590 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:08:23.533875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:08:23.543595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
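In the entries above, systemd-networkd matches eth0 against zz-default.network and acquires 172.31.16.93/20 from 172.31.16.1 over DHCPv4. A small sanity check of those lease values with Python's ipaddress module (illustrative only; it just confirms the gateway is on-link for the assigned prefix):

    import ipaddress

    # Lease values as logged by systemd-networkd above.
    addr = ipaddress.ip_interface("172.31.16.93/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(addr.network)             # 172.31.16.0/20
    print(gateway in addr.network)  # True: DHCP server/gateway is on-link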
Feb 13 20:08:23.546357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:08:23.558516 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:08:23.563431 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:08:23.568519 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:08:23.578925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:08:23.647192 kernel: loop0: detected capacity change from 0 to 61336 Feb 13 20:08:23.640848 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:08:23.679594 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:08:23.682714 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:08:23.726198 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:08:23.765430 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 20:08:23.853197 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 20:08:23.965208 kernel: loop3: detected capacity change from 0 to 142488 Feb 13 20:08:24.105190 kernel: loop4: detected capacity change from 0 to 61336 Feb 13 20:08:24.145719 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 20:08:24.177196 kernel: loop6: detected capacity change from 0 to 140768 Feb 13 20:08:24.198202 kernel: loop7: detected capacity change from 0 to 142488 Feb 13 20:08:24.220328 (sd-merge)[1801]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 20:08:24.220989 (sd-merge)[1801]: Merged extensions into '/usr'. Feb 13 20:08:24.226372 systemd[1]: Reloading requested from client PID 1788 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:08:24.226394 systemd[1]: Reloading... Feb 13 20:08:24.302191 zram_generator::config[1825]: No configuration found. Feb 13 20:08:24.482687 systemd-networkd[1657]: eth0: Gained IPv6LL Feb 13 20:08:24.605654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:08:24.741489 systemd[1]: Reloading finished in 514 ms. Feb 13 20:08:24.763646 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:08:24.766752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:08:24.787477 systemd[1]: Starting ensure-sysext.service... Feb 13 20:08:24.806451 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:08:24.821606 systemd[1]: Reloading requested from client PID 1885 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:08:24.821646 systemd[1]: Reloading... Feb 13 20:08:24.857770 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:08:24.865349 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:08:24.866552 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
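The loop device messages and the sd-merge line above are systemd-sysext overlaying the extension images it finds, including the kubernetes image linked into /etc/extensions earlier, onto /usr and /opt. A sketch of the directory-scan side of that, assuming the standard sysext search paths; the real merge also verifies each image's extension-release metadata, which this listing does not attempt:

    from pathlib import Path

    # Directories systemd-sysext considers when looking for extension images.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        # Illustrative scan only: collects *.raw images and extension
        # directories without validating their extension-release files.
        found = []
        for base in SEARCH_PATHS:
            p = Path(base)
            if p.is_dir():
                found.extend(sorted(entry for entry in p.iterdir()
                                    if entry.suffix == ".raw" or entry.is_dir()))
        return found

    if __name__ == "__main__":
        for image in list_extension_images():
            print(image)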
Feb 13 20:08:24.866891 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Feb 13 20:08:24.866987 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Feb 13 20:08:24.873341 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:08:24.873354 systemd-tmpfiles[1886]: Skipping /boot Feb 13 20:08:24.888416 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:08:24.888592 systemd-tmpfiles[1886]: Skipping /boot Feb 13 20:08:24.972186 zram_generator::config[1916]: No configuration found. Feb 13 20:08:25.117017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:08:25.206621 systemd[1]: Reloading finished in 383 ms. Feb 13 20:08:25.231953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:08:25.244403 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:08:25.256670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:08:25.263067 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:08:25.277521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:08:25.286375 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:08:25.317803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:25.318131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:08:25.331715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:08:25.336137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:08:25.356008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:08:25.357478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:08:25.360380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:25.361869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:08:25.362126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:08:25.366993 ldconfig[1784]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:08:25.386611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:25.387031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:08:25.402697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:08:25.404594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:08:25.404920 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 20:08:25.409995 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:08:25.412392 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:08:25.416634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:08:25.416886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:08:25.424616 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:08:25.424878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:08:25.435837 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:08:25.440228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:08:25.444535 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:08:25.453874 augenrules[2011]: No rules Feb 13 20:08:25.462073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:08:25.465014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:25.466828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:08:25.477608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:08:25.480853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:08:25.494461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:08:25.528533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:08:25.536406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:08:25.536724 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:08:25.559656 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:08:25.560970 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:08:25.565140 systemd-resolved[1977]: Positive Trust Anchors: Feb 13 20:08:25.565556 systemd-resolved[1977]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:08:25.565653 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:08:25.565768 systemd-resolved[1977]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:08:25.568192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:08:25.568463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:08:25.575779 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:08:25.576029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 20:08:25.579577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:08:25.579865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:08:25.582602 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:08:25.583488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:08:25.587508 systemd-resolved[1977]: Defaulting to hostname 'linux'. Feb 13 20:08:25.592910 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:08:25.600823 systemd[1]: Reached target network.target - Network. Feb 13 20:08:25.602778 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:08:25.604261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:08:25.605956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:08:25.606066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:08:25.606100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:08:25.606689 systemd[1]: Finished ensure-sysext.service. Feb 13 20:08:25.613474 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:08:25.616085 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:08:25.617665 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:08:25.623602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:08:25.625454 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:08:25.628618 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:08:25.631313 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:08:25.632964 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:08:25.633012 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:08:25.634277 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:08:25.636881 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:08:25.640092 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:08:25.642948 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:08:25.646427 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:08:25.647949 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:08:25.649473 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:08:25.651407 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:08:25.651571 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:08:25.651690 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:08:25.656478 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 20:08:25.662344 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:08:25.665964 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:08:25.674313 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:08:25.687715 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:08:25.689580 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:08:25.700323 jq[2046]: false Feb 13 20:08:25.702450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:25.719476 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:08:25.728332 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:08:25.756523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:08:25.776725 extend-filesystems[2047]: Found loop4 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found loop5 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found loop6 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found loop7 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p1 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p2 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p3 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found usr Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p4 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p6 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p7 Feb 13 20:08:25.791424 extend-filesystems[2047]: Found nvme0n1p9 Feb 13 20:08:25.791424 extend-filesystems[2047]: Checking size of /dev/nvme0n1p9 Feb 13 20:08:25.788800 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:08:25.815392 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 20:08:25.827729 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:08:25.843446 extend-filesystems[2047]: Resized partition /dev/nvme0n1p9 Feb 13 20:08:25.857744 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:08:25.868382 extend-filesystems[2068]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:08:25.886131 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 20:08:25.884022 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:08:25.883154 dbus-daemon[2044]: [system] SELinux support is enabled Feb 13 20:08:25.886003 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:08:25.904390 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:08:25.927739 dbus-daemon[2044]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1657 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:08:25.933521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:08:25.937155 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 20:08:25.963287 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: ---------------------------------------------------- Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: corporation. Support and training for ntp-4 are Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: available at https://www.nwtime.org/support Feb 13 20:08:25.975485 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: ---------------------------------------------------- Feb 13 20:08:25.970722 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:08:25.963322 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: proto: precision = 0.068 usec (-24) Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: basedate set to 2025-02-01 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: gps base set to 2025-02-02 (week 2352) Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen normally on 3 eth0 172.31.16.93:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen normally on 4 lo [::1]:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listen normally on 5 eth0 [fe80::4ab:86ff:fe05:596d%2]:123 Feb 13 20:08:25.994529 ntpd[2052]: 13 Feb 20:08:25 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Feb 13 20:08:26.006572 jq[2079]: true Feb 13 20:08:25.974224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:08:25.963335 ntpd[2052]: ---------------------------------------------------- Feb 13 20:08:25.985764 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:08:25.963345 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:08:25.988595 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:08:25.963356 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:08:25.963367 ntpd[2052]: corporation. 
Support and training for ntp-4 are Feb 13 20:08:25.963379 ntpd[2052]: available at https://www.nwtime.org/support Feb 13 20:08:25.963390 ntpd[2052]: ---------------------------------------------------- Feb 13 20:08:25.980620 ntpd[2052]: proto: precision = 0.068 usec (-24) Feb 13 20:08:25.985834 ntpd[2052]: basedate set to 2025-02-01 Feb 13 20:08:25.985852 ntpd[2052]: gps base set to 2025-02-02 (week 2352) Feb 13 20:08:25.991600 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:08:25.991655 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:08:25.991847 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:08:25.991889 ntpd[2052]: Listen normally on 3 eth0 172.31.16.93:123 Feb 13 20:08:25.991935 ntpd[2052]: Listen normally on 4 lo [::1]:123 Feb 13 20:08:25.991983 ntpd[2052]: Listen normally on 5 eth0 [fe80::4ab:86ff:fe05:596d%2]:123 Feb 13 20:08:25.992021 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Feb 13 20:08:26.019534 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:08:26.019549 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:08:26.019772 ntpd[2052]: 13 Feb 20:08:26 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:08:26.019843 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:08:26.019869 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:08:26.019979 ntpd[2052]: 13 Feb 20:08:26 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:08:26.069113 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 20:08:26.077399 update_engine[2077]: I20250213 20:08:26.076774 2077 main.cc:92] Flatcar Update Engine starting Feb 13 20:08:26.114728 update_engine[2077]: I20250213 20:08:26.094066 2077 update_check_scheduler.cc:74] Next update check in 10m13s Feb 13 20:08:26.078145 (ntainerd)[2097]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:08:26.096832 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:08:26.109078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:08:26.127410 extend-filesystems[2068]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 20:08:26.127410 extend-filesystems[2068]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:08:26.127410 extend-filesystems[2068]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 20:08:26.149182 jq[2092]: true Feb 13 20:08:26.109134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:08:26.160448 extend-filesystems[2047]: Resized filesystem in /dev/nvme0n1p9 Feb 13 20:08:26.129789 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:08:26.113023 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:08:26.113052 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:08:26.116434 systemd[1]: Started update-engine.service - Update Engine. 
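The numbers in the resize2fs/EXT4 messages above can be sanity-checked directly: the root filesystem grows from 553472 to 1489915 blocks of 4 KiB each. A quick arithmetic sketch, using nothing beyond the block counts quoted in the log:

```python
# Sanity-check the online resize reported above:
# "EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks" (4 KiB blocks)
BLOCK = 4 * 1024
old_blocks, new_blocks = 553_472, 1_489_915

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
print(f"before: {old_bytes / 2**30:.2f} GiB")   # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")   # ~5.68 GiB
print(f"gained: {(new_bytes - old_bytes) / 2**30:.2f} GiB")
```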
Feb 13 20:08:26.118911 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:08:26.131979 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:08:26.144597 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:08:26.144916 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:08:26.204336 tar[2088]: linux-amd64/helm Feb 13 20:08:26.250426 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:08:26.289993 coreos-metadata[2043]: Feb 13 20:08:26.289 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:08:26.298511 coreos-metadata[2043]: Feb 13 20:08:26.297 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 20:08:26.303250 coreos-metadata[2043]: Feb 13 20:08:26.303 INFO Fetch successful Feb 13 20:08:26.306241 coreos-metadata[2043]: Feb 13 20:08:26.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 20:08:26.307564 coreos-metadata[2043]: Feb 13 20:08:26.307 INFO Fetch successful Feb 13 20:08:26.314390 coreos-metadata[2043]: Feb 13 20:08:26.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 20:08:26.319188 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2129) Feb 13 20:08:26.321555 coreos-metadata[2043]: Feb 13 20:08:26.321 INFO Fetch successful Feb 13 20:08:26.321555 coreos-metadata[2043]: Feb 13 20:08:26.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.325 INFO Fetch successful Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.331 INFO Fetch failed with 404: resource not found Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.331 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.332 INFO Fetch successful Feb 13 20:08:26.332955 coreos-metadata[2043]: Feb 13 20:08:26.332 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 20:08:26.328536 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 20:08:26.344561 coreos-metadata[2043]: Feb 13 20:08:26.338 INFO Fetch successful Feb 13 20:08:26.344561 coreos-metadata[2043]: Feb 13 20:08:26.338 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 20:08:26.347351 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 20:08:26.362620 coreos-metadata[2043]: Feb 13 20:08:26.355 INFO Fetch successful Feb 13 20:08:26.362620 coreos-metadata[2043]: Feb 13 20:08:26.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 20:08:26.364938 coreos-metadata[2043]: Feb 13 20:08:26.364 INFO Fetch successful Feb 13 20:08:26.364938 coreos-metadata[2043]: Feb 13 20:08:26.364 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 20:08:26.372481 coreos-metadata[2043]: Feb 13 20:08:26.369 INFO Fetch successful Feb 13 20:08:26.549671 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
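The coreos-metadata agent above is simply walking the EC2 instance-metadata service: it first PUTs to the token endpoint (IMDSv2) and then GETs individual meta-data paths, treating a 404 (as for "ipv6" here) as "not present" rather than fatal. A minimal sketch of that flow using only the standard library; the endpoints are the ones shown in the log, and the request only succeeds from inside an EC2 instance.

```python
"""Sketch: fetch EC2 instance metadata the way coreos-metadata does above.
Only meaningful on an EC2 instance (169.254.169.254 is link-local)."""
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"


def imds_token(ttl=21600):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()


def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:  # e.g. "ipv6" on an IPv4-only instance, as in the log
            return None
        raise


if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "ipv6"):
        print(path, "=", imds_get(path, token))
```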
Feb 13 20:08:26.551588 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:08:26.585512 systemd-logind[2075]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 20:08:26.588900 sshd_keygen[2095]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:08:26.594086 systemd-logind[2075]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 20:08:26.594123 systemd-logind[2075]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:08:26.601556 systemd-logind[2075]: New seat seat0. Feb 13 20:08:26.602942 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:08:26.659680 bash[2169]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:08:26.669134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:08:26.688898 systemd[1]: Starting sshkeys.service... Feb 13 20:08:26.784105 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:08:26.786595 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:08:26.790763 dbus-daemon[2044]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2132 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:08:26.813487 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:08:26.825927 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:08:26.839932 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:08:26.843029 amazon-ssm-agent[2141]: Initializing new seelog logger Feb 13 20:08:26.843029 amazon-ssm-agent[2141]: New Seelog Logger Creation Complete Feb 13 20:08:26.843029 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.843029 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.849686 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 processing appconfig overrides Feb 13 20:08:26.849686 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.849961 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.849999 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 processing appconfig overrides Feb 13 20:08:26.865594 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO Proxy environment variables: Feb 13 20:08:26.866343 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.869711 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.869711 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 processing appconfig overrides Feb 13 20:08:26.900876 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:08:26.900876 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 20:08:26.900876 amazon-ssm-agent[2141]: 2025/02/13 20:08:26 processing appconfig overrides Feb 13 20:08:26.976133 polkitd[2215]: Started polkitd version 121 Feb 13 20:08:26.985884 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO https_proxy: Feb 13 20:08:27.024975 locksmithd[2120]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:08:27.043867 polkitd[2215]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:08:27.053213 polkitd[2215]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:08:27.065186 polkitd[2215]: Finished loading, compiling and executing 2 rules Feb 13 20:08:27.066670 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:08:27.066438 dbus-daemon[2044]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:08:27.072461 polkitd[2215]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:08:27.085065 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO http_proxy: Feb 13 20:08:27.118532 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:08:27.135764 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:08:27.188284 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO no_proxy: Feb 13 20:08:27.224443 systemd-hostnamed[2132]: Hostname set to <ip-172-31-16-93> (transient) Feb 13 20:08:27.225261 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:08:27.225695 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:08:27.227046 systemd-resolved[1977]: System hostname changed to 'ip-172-31-16-93'. Feb 13 20:08:27.255319 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:08:27.289659 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO Checking if agent identity type OnPrem can be assumed Feb 13 20:08:27.310807 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:08:27.327186 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:08:27.339371 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:08:27.339610 coreos-metadata[2216]: Feb 13 20:08:27.339 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:08:27.341699 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:08:27.353502 coreos-metadata[2216]: Feb 13 20:08:27.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 20:08:27.354106 coreos-metadata[2216]: Feb 13 20:08:27.354 INFO Fetch successful Feb 13 20:08:27.354291 coreos-metadata[2216]: Feb 13 20:08:27.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:08:27.361197 coreos-metadata[2216]: Feb 13 20:08:27.358 INFO Fetch successful Feb 13 20:08:27.364264 unknown[2216]: wrote ssh authorized keys file for user: core Feb 13 20:08:27.390578 amazon-ssm-agent[2141]: 2025-02-13 20:08:26 INFO Checking if agent identity type EC2 can be assumed Feb 13 20:08:27.454591 update-ssh-keys[2301]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:08:27.473214 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:08:27.483998 systemd[1]: Finished sshkeys.service. 
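coreos-metadata-sshkeys@core above fetches the instance's OpenSSH public key from the metadata service and installs it into /home/core/.ssh/authorized_keys. The snippet below sketches only the "write the key with sane ownership and permissions" half of that, assuming the key text has already been fetched (for example with the IMDS helper sketched earlier); the username and path match the log, everything else is illustrative.

```python
"""Sketch: install an OpenSSH public key for a user, as coreos-metadata-sshkeys
does above for 'core'. Assumes the key text was already fetched; needs root to chown."""
import os
import pwd
from pathlib import Path


def install_authorized_key(user: str, key_line: str) -> Path:
    entry = pwd.getpwnam(user)
    ssh_dir = Path(entry.pw_dir) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    existing = auth.read_text() if auth.exists() else ""
    if key_line.strip() not in existing:
        with open(auth, "a") as fh:
            fh.write(key_line.rstrip("\n") + "\n")
    os.chmod(auth, 0o600)
    # sshd expects the directory and file to be owned by the login user
    os.chown(ssh_dir, entry.pw_uid, entry.pw_gid)
    os.chown(auth, entry.pw_uid, entry.pw_gid)
    return auth


# Example (requires root and a real key string):
# install_authorized_key("core", "ssh-rsa AAAA... core@example")
```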
Feb 13 20:08:27.496631 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO Agent will take identity from EC2 Feb 13 20:08:27.596306 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:08:27.629816 containerd[2097]: time="2025-02-13T20:08:27.627796269Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:08:27.695385 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [Registrar] Starting registrar module Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [EC2Identity] EC2 registration was successful. Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [CredentialRefresher] credentialRefresher has started Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 20:08:27.710222 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 20:08:27.748357 containerd[2097]: time="2025-02-13T20:08:27.748301940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.752222 containerd[2097]: time="2025-02-13T20:08:27.752002670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:08:27.752222 containerd[2097]: time="2025-02-13T20:08:27.752051393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:08:27.752222 containerd[2097]: time="2025-02-13T20:08:27.752075692Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:08:27.752798 containerd[2097]: time="2025-02-13T20:08:27.752589273Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:08:27.752798 containerd[2097]: time="2025-02-13T20:08:27.752622013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.752798 containerd[2097]: time="2025-02-13T20:08:27.752694937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:08:27.752798 containerd[2097]: time="2025-02-13T20:08:27.752714775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.753246095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.753275043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.753296537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.753312443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.753415000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754182 containerd[2097]: time="2025-02-13T20:08:27.754121133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:08:27.754970 containerd[2097]: time="2025-02-13T20:08:27.754941229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:08:27.755077 containerd[2097]: time="2025-02-13T20:08:27.755059910Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:08:27.755279 containerd[2097]: time="2025-02-13T20:08:27.755261291Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:08:27.755611 containerd[2097]: time="2025-02-13T20:08:27.755474993Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:08:27.766646 containerd[2097]: time="2025-02-13T20:08:27.766595868Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:08:27.766779 containerd[2097]: time="2025-02-13T20:08:27.766673906Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:08:27.766779 containerd[2097]: time="2025-02-13T20:08:27.766698078Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:08:27.766779 containerd[2097]: time="2025-02-13T20:08:27.766718163Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:08:27.766779 containerd[2097]: time="2025-02-13T20:08:27.766751726Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:08:27.768962 containerd[2097]: time="2025-02-13T20:08:27.767050095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.769862875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770086343Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770293718Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770332993Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770355385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770375847Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770431339Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770456602Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770477978Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770497479Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770515979Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770535599Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770564842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771405 containerd[2097]: time="2025-02-13T20:08:27.770585685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770606836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770625977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770643732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770674249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770692020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770710869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770730131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770751169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770771407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.771978 containerd[2097]: time="2025-02-13T20:08:27.770790234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773070411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773132517Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773193464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773223266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773300229Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773363512Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773396528Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773421174Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773446566Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773463718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773489243Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773508931Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:08:27.774210 containerd[2097]: time="2025-02-13T20:08:27.773525947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:08:27.774716 containerd[2097]: time="2025-02-13T20:08:27.774104106Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:08:27.774716 containerd[2097]: time="2025-02-13T20:08:27.774226440Z" level=info msg="Connect containerd service" Feb 13 20:08:27.774716 containerd[2097]: time="2025-02-13T20:08:27.774294634Z" level=info msg="using legacy CRI server" Feb 13 20:08:27.774716 containerd[2097]: time="2025-02-13T20:08:27.774305500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:08:27.774716 containerd[2097]: time="2025-02-13T20:08:27.774469289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:08:27.780551 containerd[2097]: time="2025-02-13T20:08:27.780499453Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781112337Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781190209Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781236907Z" level=info msg="Start subscribing containerd event" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781291560Z" level=info msg="Start recovering state" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781431673Z" level=info msg="Start event monitor" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781455071Z" level=info msg="Start snapshots syncer" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.781469880Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.783193156Z" level=info msg="Start streaming server" Feb 13 20:08:27.788229 containerd[2097]: time="2025-02-13T20:08:27.783475768Z" level=info msg="containerd successfully booted in 0.161392s" Feb 13 20:08:27.784363 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:08:27.793786 amazon-ssm-agent[2141]: 2025-02-13 20:08:27 INFO [CredentialRefresher] Next credential rotation will be in 31.824993896 minutes Feb 13 20:08:28.298354 tar[2088]: linux-amd64/LICENSE Feb 13 20:08:28.298809 tar[2088]: linux-amd64/README.md Feb 13 20:08:28.317146 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:08:28.639547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:28.641920 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:08:28.645398 systemd[1]: Startup finished in 9.607s (kernel) + 9.211s (userspace) = 18.818s. Feb 13 20:08:28.731396 amazon-ssm-agent[2141]: 2025-02-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 20:08:28.797800 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:28.843007 amazon-ssm-agent[2141]: 2025-02-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2341) started Feb 13 20:08:28.941033 amazon-ssm-agent[2141]: 2025-02-13 20:08:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 20:08:29.603996 kubelet[2339]: E0213 20:08:29.603907 2339 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:29.608004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:29.608485 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:33.552964 systemd-resolved[1977]: Clock change detected. Flushing caches. Feb 13 20:08:33.716627 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:08:33.722574 systemd[1]: Started sshd@0-172.31.16.93:22-139.178.89.65:51144.service - OpenSSH per-connection server daemon (139.178.89.65:51144). 
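The kubelet above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node like this one the file is normally written later (for example by kubeadm during init/join), so systemd keeps restarting the unit until it appears, as the repeated failures further down show. A small pre-flight sketch of the same check; the path is taken from the log line, the rest is illustrative.

```python
"""Sketch: reproduce the kubelet pre-flight failure seen above by checking for
the config file it tries to load. Path taken from the log line."""
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")


def check_kubelet_config() -> int:
    if not KUBELET_CONFIG.is_file():
        print(f"kubelet would fail: {KUBELET_CONFIG} does not exist "
              "(typically written by kubeadm during init/join)", file=sys.stderr)
        return 1
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    return 0


if __name__ == "__main__":
    sys.exit(check_kubelet_config())
```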
Feb 13 20:08:33.932506 sshd[2362]: Accepted publickey for core from 139.178.89.65 port 51144 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:33.935189 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:33.950923 systemd-logind[2075]: New session 1 of user core. Feb 13 20:08:33.952505 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:08:33.959568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:08:33.978501 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:08:33.989425 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:08:33.994999 (systemd)[2368]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:08:34.153331 systemd[2368]: Queued start job for default target default.target. Feb 13 20:08:34.153827 systemd[2368]: Created slice app.slice - User Application Slice. Feb 13 20:08:34.153858 systemd[2368]: Reached target paths.target - Paths. Feb 13 20:08:34.153877 systemd[2368]: Reached target timers.target - Timers. Feb 13 20:08:34.161223 systemd[2368]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:08:34.170556 systemd[2368]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:08:34.170634 systemd[2368]: Reached target sockets.target - Sockets. Feb 13 20:08:34.170652 systemd[2368]: Reached target basic.target - Basic System. Feb 13 20:08:34.170705 systemd[2368]: Reached target default.target - Main User Target. Feb 13 20:08:34.170743 systemd[2368]: Startup finished in 164ms. Feb 13 20:08:34.171412 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:08:34.177461 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:08:34.329029 systemd[1]: Started sshd@1-172.31.16.93:22-139.178.89.65:51150.service - OpenSSH per-connection server daemon (139.178.89.65:51150). Feb 13 20:08:34.493921 sshd[2380]: Accepted publickey for core from 139.178.89.65 port 51150 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:34.495911 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:34.502276 systemd-logind[2075]: New session 2 of user core. Feb 13 20:08:34.507997 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:08:34.632632 sshd[2380]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:34.645202 systemd[1]: sshd@1-172.31.16.93:22-139.178.89.65:51150.service: Deactivated successfully. Feb 13 20:08:34.659920 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:08:34.662609 systemd-logind[2075]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:08:34.680622 systemd[1]: Started sshd@2-172.31.16.93:22-139.178.89.65:56768.service - OpenSSH per-connection server daemon (139.178.89.65:56768). Feb 13 20:08:34.683449 systemd-logind[2075]: Removed session 2. Feb 13 20:08:34.840826 sshd[2388]: Accepted publickey for core from 139.178.89.65 port 56768 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:34.842557 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:34.850862 systemd-logind[2075]: New session 3 of user core. Feb 13 20:08:34.858447 systemd[1]: Started session-3.scope - Session 3 of User core. 
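The "RSA SHA256:7nv7xa..." string sshd logs on each accepted connection is the standard OpenSSH fingerprint: SHA-256 over the decoded public-key blob, base64-encoded with the trailing padding stripped. A sketch that computes the same fingerprint from an authorized_keys-style line (standard library only; the default file path is just an example):

```python
"""Sketch: compute the OpenSSH SHA256 fingerprint that sshd logs above
("RSA SHA256:...") from a public key line, e.g. one in authorized_keys."""
import base64
import hashlib
import sys


def openssh_fingerprint(pubkey_line: str) -> str:
    # Assumes a plain "<type> <base64-blob> [comment]" line without options.
    blob_b64 = pubkey_line.split()[1]
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded with '=' padding removed.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/home/core/.ssh/authorized_keys"
    with open(path) as fh:
        for line in fh:
            if line.strip() and not line.startswith("#"):
                print(openssh_fingerprint(line), line.split()[0])
```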
Feb 13 20:08:34.982882 sshd[2388]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:34.999752 systemd[1]: sshd@2-172.31.16.93:22-139.178.89.65:56768.service: Deactivated successfully. Feb 13 20:08:35.012106 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:08:35.015164 systemd-logind[2075]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:08:35.031693 systemd[1]: Started sshd@3-172.31.16.93:22-139.178.89.65:56770.service - OpenSSH per-connection server daemon (139.178.89.65:56770). Feb 13 20:08:35.036217 systemd-logind[2075]: Removed session 3. Feb 13 20:08:35.209753 sshd[2396]: Accepted publickey for core from 139.178.89.65 port 56770 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:35.210510 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:35.221327 systemd-logind[2075]: New session 4 of user core. Feb 13 20:08:35.225584 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:08:35.358313 sshd[2396]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:35.365193 systemd[1]: sshd@3-172.31.16.93:22-139.178.89.65:56770.service: Deactivated successfully. Feb 13 20:08:35.373252 systemd-logind[2075]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:08:35.374337 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:08:35.378329 systemd-logind[2075]: Removed session 4. Feb 13 20:08:35.389714 systemd[1]: Started sshd@4-172.31.16.93:22-139.178.89.65:56772.service - OpenSSH per-connection server daemon (139.178.89.65:56772). Feb 13 20:08:35.569740 sshd[2404]: Accepted publickey for core from 139.178.89.65 port 56772 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:35.571812 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:35.585537 systemd-logind[2075]: New session 5 of user core. Feb 13 20:08:35.602463 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:08:35.746370 sudo[2408]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:08:35.747587 sudo[2408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:08:35.771458 sudo[2408]: pam_unix(sudo:session): session closed for user root Feb 13 20:08:35.794933 sshd[2404]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:35.804861 systemd[1]: sshd@4-172.31.16.93:22-139.178.89.65:56772.service: Deactivated successfully. Feb 13 20:08:35.814295 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:08:35.815620 systemd-logind[2075]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:08:35.824392 systemd[1]: Started sshd@5-172.31.16.93:22-139.178.89.65:56784.service - OpenSSH per-connection server daemon (139.178.89.65:56784). Feb 13 20:08:35.825647 systemd-logind[2075]: Removed session 5. Feb 13 20:08:35.974820 sshd[2413]: Accepted publickey for core from 139.178.89.65 port 56784 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:35.976565 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:35.997057 systemd-logind[2075]: New session 6 of user core. Feb 13 20:08:36.005221 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 20:08:36.117640 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:08:36.118029 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:08:36.123169 sudo[2418]: pam_unix(sudo:session): session closed for user root Feb 13 20:08:36.130055 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:08:36.131153 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:08:36.154544 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:08:36.156931 auditctl[2421]: No rules Feb 13 20:08:36.157390 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:08:36.157853 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:08:36.169514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:08:36.201495 augenrules[2440]: No rules Feb 13 20:08:36.204048 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:08:36.208401 sudo[2417]: pam_unix(sudo:session): session closed for user root Feb 13 20:08:36.230714 sshd[2413]: pam_unix(sshd:session): session closed for user core Feb 13 20:08:36.236383 systemd[1]: sshd@5-172.31.16.93:22-139.178.89.65:56784.service: Deactivated successfully. Feb 13 20:08:36.240885 systemd-logind[2075]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:08:36.242508 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:08:36.243881 systemd-logind[2075]: Removed session 6. Feb 13 20:08:36.260959 systemd[1]: Started sshd@6-172.31.16.93:22-139.178.89.65:56792.service - OpenSSH per-connection server daemon (139.178.89.65:56792). Feb 13 20:08:36.428344 sshd[2449]: Accepted publickey for core from 139.178.89.65 port 56792 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:08:36.429979 sshd[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:08:36.447388 systemd-logind[2075]: New session 7 of user core. Feb 13 20:08:36.453469 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:08:36.565305 sudo[2453]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:08:36.565707 sudo[2453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:08:37.123453 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:08:37.125023 (dockerd)[2469]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:08:37.736446 dockerd[2469]: time="2025-02-13T20:08:37.736378335Z" level=info msg="Starting up" Feb 13 20:08:38.746609 dockerd[2469]: time="2025-02-13T20:08:38.746559353Z" level=info msg="Loading containers: start." Feb 13 20:08:38.957419 kernel: Initializing XFRM netlink socket Feb 13 20:08:39.058182 (udev-worker)[2491]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:08:39.154975 systemd-networkd[1657]: docker0: Link UP Feb 13 20:08:39.194119 dockerd[2469]: time="2025-02-13T20:08:39.193835881Z" level=info msg="Loading containers: done." 
Feb 13 20:08:39.232896 dockerd[2469]: time="2025-02-13T20:08:39.232782990Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:08:39.233449 dockerd[2469]: time="2025-02-13T20:08:39.232965314Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:08:39.233449 dockerd[2469]: time="2025-02-13T20:08:39.233400275Z" level=info msg="Daemon has completed initialization" Feb 13 20:08:39.292620 dockerd[2469]: time="2025-02-13T20:08:39.292330177Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:08:39.293278 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:08:40.447529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:08:40.460350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:40.808315 containerd[2097]: time="2025-02-13T20:08:40.806586237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:08:41.283560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:41.301851 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:41.384109 kubelet[2626]: E0213 20:08:41.384038 2626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:41.393226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:41.393960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:41.704381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814791394.mount: Deactivated successfully. 
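Once dockerd reports "API listen on /run/docker.sock", the daemon answers HTTP over that Unix socket. Below is a minimal liveness-check sketch speaking raw HTTP over the socket with the standard library; /_ping is the Docker Engine health endpoint and /run/docker.sock is the path from the log. It needs permission to access the socket (root or the docker group).

```python
"""Sketch: ping the Docker daemon over the Unix socket it announced above.
Requires access to /run/docker.sock (root or membership in the 'docker' group)."""
import socket

DOCKER_SOCK = "/run/docker.sock"


def docker_ping() -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCK)
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    reply = b"".join(chunks).decode(errors="replace")
    # First line is the status, e.g. "HTTP/1.1 200 OK"; the body is "OK" when healthy.
    return reply.splitlines()[0]


if __name__ == "__main__":
    print(docker_ping())
```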
Feb 13 20:08:44.596250 containerd[2097]: time="2025-02-13T20:08:44.596195243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:44.597726 containerd[2097]: time="2025-02-13T20:08:44.597535003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 20:08:44.601880 containerd[2097]: time="2025-02-13T20:08:44.599730302Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:44.608397 containerd[2097]: time="2025-02-13T20:08:44.608325009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:44.609905 containerd[2097]: time="2025-02-13T20:08:44.609695373Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.803062997s" Feb 13 20:08:44.609905 containerd[2097]: time="2025-02-13T20:08:44.609749450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:08:44.657189 containerd[2097]: time="2025-02-13T20:08:44.657144533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:08:47.757812 containerd[2097]: time="2025-02-13T20:08:47.757663511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:47.759578 containerd[2097]: time="2025-02-13T20:08:47.759401013Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 20:08:47.761902 containerd[2097]: time="2025-02-13T20:08:47.761137866Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:47.769763 containerd[2097]: time="2025-02-13T20:08:47.769713166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:47.772874 containerd[2097]: time="2025-02-13T20:08:47.772822707Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 3.115633837s" Feb 13 20:08:47.773199 containerd[2097]: time="2025-02-13T20:08:47.773174245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
20:08:47.800702 containerd[2097]: time="2025-02-13T20:08:47.800669693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:08:49.680285 containerd[2097]: time="2025-02-13T20:08:49.680237831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:49.681651 containerd[2097]: time="2025-02-13T20:08:49.681521697Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 20:08:49.683685 containerd[2097]: time="2025-02-13T20:08:49.683134672Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:49.689040 containerd[2097]: time="2025-02-13T20:08:49.688994734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:49.690490 containerd[2097]: time="2025-02-13T20:08:49.690443396Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.889529425s" Feb 13 20:08:49.690602 containerd[2097]: time="2025-02-13T20:08:49.690495113Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:08:49.723856 containerd[2097]: time="2025-02-13T20:08:49.723812163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:08:51.443842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414296097.mount: Deactivated successfully. Feb 13 20:08:51.447969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:08:51.461462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:51.934332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:51.938223 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:52.052474 kubelet[2731]: E0213 20:08:52.052365 2731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:52.056439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:52.056728 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
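The containerd pull messages give both the bytes transferred ("bytes read=...") and the wall-clock time per image, so an effective pull throughput is easy to estimate. A quick sketch using the figures quoted in the log above; "bytes read" is the compressed transfer size, so this approximates network throughput rather than unpacked image size, and the duration can include queueing time.

```python
# Rough pull throughput for the images above, using the "bytes read" and
# duration values from the containerd log lines (compressed transfer size).
pulls = {
    "kube-apiserver:v1.30.10":          (32_678_214, 3.803062997),
    "kube-controller-manager:v1.30.10": (29_611_545, 3.115633837),
    "kube-scheduler:v1.30.10":          (17_782_130, 1.889529425),
}

for image, (nbytes, seconds) in pulls.items():
    mib_s = nbytes / seconds / 2**20
    print(f"{image:35s} {nbytes/2**20:6.1f} MiB in {seconds:6.3f}s -> {mib_s:5.1f} MiB/s")
```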
Feb 13 20:08:52.291053 containerd[2097]: time="2025-02-13T20:08:52.291000421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:52.292343 containerd[2097]: time="2025-02-13T20:08:52.292182177Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:08:52.295137 containerd[2097]: time="2025-02-13T20:08:52.293847869Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:52.297046 containerd[2097]: time="2025-02-13T20:08:52.296991779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:52.297700 containerd[2097]: time="2025-02-13T20:08:52.297660678Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.573805221s" Feb 13 20:08:52.297780 containerd[2097]: time="2025-02-13T20:08:52.297708482Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:08:52.326821 containerd[2097]: time="2025-02-13T20:08:52.326439589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:08:53.086641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78475704.mount: Deactivated successfully. 
Feb 13 20:08:54.459763 containerd[2097]: time="2025-02-13T20:08:54.459661813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:54.461492 containerd[2097]: time="2025-02-13T20:08:54.461406105Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:08:54.463478 containerd[2097]: time="2025-02-13T20:08:54.462999591Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:54.466764 containerd[2097]: time="2025-02-13T20:08:54.466723967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:54.468848 containerd[2097]: time="2025-02-13T20:08:54.468803388Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.142316286s" Feb 13 20:08:54.468950 containerd[2097]: time="2025-02-13T20:08:54.468854361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:08:54.502773 containerd[2097]: time="2025-02-13T20:08:54.502726208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:08:55.027850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428420252.mount: Deactivated successfully. 
Feb 13 20:08:55.036378 containerd[2097]: time="2025-02-13T20:08:55.036326566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:55.037807 containerd[2097]: time="2025-02-13T20:08:55.037618785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 20:08:55.040651 containerd[2097]: time="2025-02-13T20:08:55.039355858Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:55.042056 containerd[2097]: time="2025-02-13T20:08:55.042019596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:55.042820 containerd[2097]: time="2025-02-13T20:08:55.042780105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 540.002266ms" Feb 13 20:08:55.042961 containerd[2097]: time="2025-02-13T20:08:55.042827218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:08:55.072810 containerd[2097]: time="2025-02-13T20:08:55.072765454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:08:55.735466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361689066.mount: Deactivated successfully. Feb 13 20:08:57.852000 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
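The pause:3.9 pull above identifies the same image three ways: local image id (sha256:e6f18168...), repo tag (registry.k8s.io/pause:3.9), and repo digest (registry.k8s.io/pause@sha256:7031c1b2...). A rough sketch of how a tag reference differs from a digest reference, using plain string handling (an illustration only, not containerd's actual reference parser, and it ignores registries with an explicit port):

    # Split an image reference into repo plus either tag or digest, as seen in the log above.
    def split_reference(ref: str) -> dict:
        if "@sha256:" in ref:
            repo, digest = ref.split("@", 1)
            return {"repo": repo, "tag": None, "digest": digest}
        repo, _, tag = ref.rpartition(":")
        return {"repo": repo, "tag": tag, "digest": None}

    print(split_reference("registry.k8s.io/pause:3.9"))
    print(split_reference("registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"))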
Feb 13 20:08:59.257714 containerd[2097]: time="2025-02-13T20:08:59.257593794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:59.259099 containerd[2097]: time="2025-02-13T20:08:59.258934187Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 20:08:59.261188 containerd[2097]: time="2025-02-13T20:08:59.260892098Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:59.266140 containerd[2097]: time="2025-02-13T20:08:59.265407287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:08:59.267240 containerd[2097]: time="2025-02-13T20:08:59.267195124Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.194386212s" Feb 13 20:08:59.267342 containerd[2097]: time="2025-02-13T20:08:59.267251749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:09:02.241947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:09:02.261263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:03.085255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:03.099703 (kubelet)[2923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:09:03.189085 kubelet[2923]: E0213 20:09:03.187006 2923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:09:03.194331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:09:03.194578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:09:04.332937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:04.341455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:04.388122 systemd[1]: Reloading requested from client PID 2941 ('systemctl') (unit session-7.scope)... Feb 13 20:09:04.388475 systemd[1]: Reloading... Feb 13 20:09:04.561099 zram_generator::config[2984]: No configuration found. Feb 13 20:09:04.780844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:09:04.922335 systemd[1]: Reloading finished in 533 ms. Feb 13 20:09:04.993672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:09:04.995505 systemd[1]: kubelet.service: Failed with result 'signal'. 
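The kubelet failure above (run.go:74, exit status 1 because /var/lib/kubelet/config.yaml does not exist yet) is printed in klog format: a severity letter (E for error), the month and day, the time with microseconds, the emitting process id, the source file and line, then the message. A hedged sketch of extracting those fields from one such line with a regular expression (the group names are my own labels, not part of klog):

    import re

    # klog header: <severity><MMDD> <HH:MM:SS.micros> <pid> <file:line>] <message>
    KLOG = re.compile(
        r'^(?P<severity>[IWEF])(?P<date>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)'
        r'\s+(?P<pid>\d+) (?P<source>\S+)\] (?P<message>.*)$'
    )

    line = ('E0213 20:09:03.187006 2923 run.go:74] "command failed" '
            'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')
    m = KLOG.match(line)
    print(m.group("severity"), m.group("pid"), m.group("source"))   # E 2923 run.go:74
    print(m.group("message"))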
Feb 13 20:09:04.996160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:05.001396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:05.510363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:05.529811 (kubelet)[3050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:09:05.620495 kubelet[3050]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:05.620495 kubelet[3050]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:09:05.620495 kubelet[3050]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:05.622899 kubelet[3050]: I0213 20:09:05.622706 3050 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:09:06.459575 kubelet[3050]: I0213 20:09:06.459530 3050 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:09:06.459575 kubelet[3050]: I0213 20:09:06.459563 3050 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:09:06.459844 kubelet[3050]: I0213 20:09:06.459823 3050 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:09:06.491536 kubelet[3050]: I0213 20:09:06.491493 3050 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:09:06.498279 kubelet[3050]: E0213 20:09:06.497997 3050 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.522427 kubelet[3050]: I0213 20:09:06.522399 3050 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:09:06.525253 kubelet[3050]: I0213 20:09:06.525196 3050 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:09:06.528920 kubelet[3050]: I0213 20:09:06.525251 3050 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:09:06.529156 kubelet[3050]: I0213 20:09:06.528935 3050 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:09:06.529156 kubelet[3050]: I0213 20:09:06.528954 3050 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:09:06.529156 kubelet[3050]: I0213 20:09:06.529128 3050 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:06.530165 kubelet[3050]: I0213 20:09:06.530145 3050 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:09:06.530249 kubelet[3050]: I0213 20:09:06.530169 3050 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:09:06.531115 kubelet[3050]: W0213 20:09:06.530722 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.531115 kubelet[3050]: E0213 20:09:06.530798 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.531115 kubelet[3050]: I0213 20:09:06.531044 3050 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:09:06.533778 kubelet[3050]: I0213 20:09:06.533495 3050 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:09:06.539231 kubelet[3050]: W0213 20:09:06.538778 3050 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.539231 kubelet[3050]: E0213 20:09:06.538840 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.539501 kubelet[3050]: I0213 20:09:06.539475 3050 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:09:06.545738 kubelet[3050]: I0213 20:09:06.543014 3050 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:09:06.545738 kubelet[3050]: W0213 20:09:06.543222 3050 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:09:06.545738 kubelet[3050]: I0213 20:09:06.544759 3050 server.go:1264] "Started kubelet" Feb 13 20:09:06.547668 kubelet[3050]: I0213 20:09:06.547552 3050 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:09:06.551333 kubelet[3050]: I0213 20:09:06.550543 3050 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:09:06.553384 kubelet[3050]: I0213 20:09:06.552774 3050 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:09:06.553384 kubelet[3050]: I0213 20:09:06.553092 3050 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:09:06.553384 kubelet[3050]: E0213 20:09:06.553265 3050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.93:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.93:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-93.1823dd6e37a32b63 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-93,UID:ip-172-31-16-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-93,},FirstTimestamp:2025-02-13 20:09:06.544724835 +0000 UTC m=+1.000905800,LastTimestamp:2025-02-13 20:09:06.544724835 +0000 UTC m=+1.000905800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-93,}" Feb 13 20:09:06.559106 kubelet[3050]: I0213 20:09:06.557067 3050 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:09:06.559557 kubelet[3050]: I0213 20:09:06.559543 3050 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:09:06.561469 kubelet[3050]: I0213 20:09:06.561448 3050 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:09:06.561734 kubelet[3050]: I0213 20:09:06.561723 3050 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:09:06.563040 kubelet[3050]: W0213 20:09:06.562823 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.563592 kubelet[3050]: E0213 
20:09:06.563477 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.570698 kubelet[3050]: E0213 20:09:06.570636 3050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="200ms" Feb 13 20:09:06.572387 kubelet[3050]: I0213 20:09:06.572065 3050 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:09:06.572387 kubelet[3050]: I0213 20:09:06.572176 3050 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:09:06.573229 kubelet[3050]: E0213 20:09:06.573185 3050 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:09:06.575927 kubelet[3050]: I0213 20:09:06.575910 3050 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:09:06.610355 kubelet[3050]: I0213 20:09:06.610316 3050 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:09:06.625775 kubelet[3050]: I0213 20:09:06.624461 3050 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:09:06.625775 kubelet[3050]: I0213 20:09:06.624596 3050 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:09:06.625775 kubelet[3050]: I0213 20:09:06.624620 3050 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:09:06.625775 kubelet[3050]: E0213 20:09:06.624665 3050 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:09:06.631988 kubelet[3050]: W0213 20:09:06.630877 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.631988 kubelet[3050]: E0213 20:09:06.631537 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:06.638275 kubelet[3050]: I0213 20:09:06.638250 3050 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:09:06.638442 kubelet[3050]: I0213 20:09:06.638430 3050 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:09:06.638638 kubelet[3050]: I0213 20:09:06.638624 3050 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:06.656019 kubelet[3050]: I0213 20:09:06.655989 3050 policy_none.go:49] "None policy: Start" Feb 13 20:09:06.657346 kubelet[3050]: I0213 20:09:06.657239 3050 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:09:06.657447 kubelet[3050]: I0213 20:09:06.657367 3050 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:09:06.664600 kubelet[3050]: 
I0213 20:09:06.664561 3050 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:09:06.664984 kubelet[3050]: I0213 20:09:06.664888 3050 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:09:06.665185 kubelet[3050]: I0213 20:09:06.665177 3050 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:09:06.678389 kubelet[3050]: I0213 20:09:06.678351 3050 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:06.678996 kubelet[3050]: E0213 20:09:06.678848 3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Feb 13 20:09:06.679796 kubelet[3050]: E0213 20:09:06.679770 3050 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-93\" not found" Feb 13 20:09:06.725482 kubelet[3050]: I0213 20:09:06.725156 3050 topology_manager.go:215] "Topology Admit Handler" podUID="7b037a5102af508bef32ebcbad1b0e3b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-93" Feb 13 20:09:06.728473 kubelet[3050]: I0213 20:09:06.728250 3050 topology_manager.go:215] "Topology Admit Handler" podUID="f2af8cb34e6f40dee4d03311adef1b25" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.731455 kubelet[3050]: I0213 20:09:06.731400 3050 topology_manager.go:215] "Topology Admit Handler" podUID="58d82414b5c743fa57f171ca53e496f6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-93" Feb 13 20:09:06.764041 kubelet[3050]: I0213 20:09:06.763975 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:06.771492 kubelet[3050]: E0213 20:09:06.771446 3050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="400ms" Feb 13 20:09:06.865230 kubelet[3050]: I0213 20:09:06.865168 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:06.865230 kubelet[3050]: I0213 20:09:06.865230 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.865444 kubelet[3050]: I0213 20:09:06.865253 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.865444 kubelet[3050]: I0213 20:09:06.865277 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.865444 kubelet[3050]: I0213 20:09:06.865336 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:06.865444 kubelet[3050]: I0213 20:09:06.865361 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.865444 kubelet[3050]: I0213 20:09:06.865387 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:06.865627 kubelet[3050]: I0213 20:09:06.865414 3050 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58d82414b5c743fa57f171ca53e496f6-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-93\" (UID: \"58d82414b5c743fa57f171ca53e496f6\") " pod="kube-system/kube-scheduler-ip-172-31-16-93" Feb 13 20:09:06.880848 kubelet[3050]: I0213 20:09:06.880762 3050 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:06.881653 kubelet[3050]: E0213 20:09:06.881620 3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Feb 13 20:09:07.037183 containerd[2097]: time="2025-02-13T20:09:07.037138293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-93,Uid:7b037a5102af508bef32ebcbad1b0e3b,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:07.040425 containerd[2097]: time="2025-02-13T20:09:07.040169225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-93,Uid:f2af8cb34e6f40dee4d03311adef1b25,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:07.051217 containerd[2097]: time="2025-02-13T20:09:07.050503526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-93,Uid:58d82414b5c743fa57f171ca53e496f6,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:07.172654 kubelet[3050]: E0213 20:09:07.172606 3050 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="800ms" Feb 13 20:09:07.284754 kubelet[3050]: I0213 20:09:07.284723 3050 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:07.285143 kubelet[3050]: E0213 20:09:07.285114 3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Feb 13 20:09:07.493925 kubelet[3050]: W0213 20:09:07.493859 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.493925 kubelet[3050]: E0213 20:09:07.493924 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.510562 kubelet[3050]: W0213 20:09:07.510502 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.511021 kubelet[3050]: E0213 20:09:07.510646 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.543260 kubelet[3050]: W0213 20:09:07.543220 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.543260 kubelet[3050]: E0213 20:09:07.543263 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.547878 kubelet[3050]: W0213 20:09:07.547822 3050 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.547878 kubelet[3050]: E0213 20:09:07.547884 3050 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-93&limit=500&resourceVersion=0": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:07.626989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969084422.mount: Deactivated successfully. 
Feb 13 20:09:07.648578 containerd[2097]: time="2025-02-13T20:09:07.648461681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:07.650212 containerd[2097]: time="2025-02-13T20:09:07.650165264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:09:07.652297 containerd[2097]: time="2025-02-13T20:09:07.652222778Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:07.653871 containerd[2097]: time="2025-02-13T20:09:07.653824367Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:07.654668 containerd[2097]: time="2025-02-13T20:09:07.654638428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:07.655989 containerd[2097]: time="2025-02-13T20:09:07.655776626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:07.658462 containerd[2097]: time="2025-02-13T20:09:07.657514621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:07.659788 containerd[2097]: time="2025-02-13T20:09:07.659704447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:07.662900 containerd[2097]: time="2025-02-13T20:09:07.662670175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.07982ms" Feb 13 20:09:07.665257 containerd[2097]: time="2025-02-13T20:09:07.665035860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.81025ms" Feb 13 20:09:07.667242 containerd[2097]: time="2025-02-13T20:09:07.667207109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.961391ms" Feb 13 20:09:07.940585 containerd[2097]: time="2025-02-13T20:09:07.940086343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:07.940585 containerd[2097]: time="2025-02-13T20:09:07.940165207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:07.940585 containerd[2097]: time="2025-02-13T20:09:07.940187811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.940585 containerd[2097]: time="2025-02-13T20:09:07.940308455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.945760 containerd[2097]: time="2025-02-13T20:09:07.945649468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:07.946092 containerd[2097]: time="2025-02-13T20:09:07.945788912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:07.946092 containerd[2097]: time="2025-02-13T20:09:07.945834327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.946092 containerd[2097]: time="2025-02-13T20:09:07.945941162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.956572 containerd[2097]: time="2025-02-13T20:09:07.956232911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:07.956572 containerd[2097]: time="2025-02-13T20:09:07.956299566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:07.956572 containerd[2097]: time="2025-02-13T20:09:07.956317582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.956572 containerd[2097]: time="2025-02-13T20:09:07.956436769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:07.973391 kubelet[3050]: E0213 20:09:07.973336 3050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": dial tcp 172.31.16.93:6443: connect: connection refused" interval="1.6s" Feb 13 20:09:08.074012 containerd[2097]: time="2025-02-13T20:09:08.073369803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-93,Uid:7b037a5102af508bef32ebcbad1b0e3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d881874cdc69bfb582944daac12e9b1631de3dc4ac1b6ec7146c4e7fb7c0a9\"" Feb 13 20:09:08.089243 kubelet[3050]: I0213 20:09:08.088818 3050 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:08.089840 kubelet[3050]: E0213 20:09:08.089807 3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.93:6443/api/v1/nodes\": dial tcp 172.31.16.93:6443: connect: connection refused" node="ip-172-31-16-93" Feb 13 20:09:08.090350 containerd[2097]: time="2025-02-13T20:09:08.090261965Z" level=info msg="CreateContainer within sandbox \"41d881874cdc69bfb582944daac12e9b1631de3dc4ac1b6ec7146c4e7fb7c0a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:09:08.104010 containerd[2097]: time="2025-02-13T20:09:08.103970931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-93,Uid:f2af8cb34e6f40dee4d03311adef1b25,Namespace:kube-system,Attempt:0,} returns sandbox id \"efdfbe826aa377c99190f78c4f630b97d4338ed63ad377d9e331bcde5e7c84fd\"" Feb 13 20:09:08.109387 containerd[2097]: time="2025-02-13T20:09:08.109309250Z" level=info msg="CreateContainer within sandbox \"efdfbe826aa377c99190f78c4f630b97d4338ed63ad377d9e331bcde5e7c84fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:09:08.120946 containerd[2097]: time="2025-02-13T20:09:08.120888912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-93,Uid:58d82414b5c743fa57f171ca53e496f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cea9f2d4987fc80e48f972fd2b85d4f298b5d7145c1ed90b29c03dc1df30507\"" Feb 13 20:09:08.125638 containerd[2097]: time="2025-02-13T20:09:08.125510825Z" level=info msg="CreateContainer within sandbox \"9cea9f2d4987fc80e48f972fd2b85d4f298b5d7145c1ed90b29c03dc1df30507\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:09:08.148231 containerd[2097]: time="2025-02-13T20:09:08.148183922Z" level=info msg="CreateContainer within sandbox \"efdfbe826aa377c99190f78c4f630b97d4338ed63ad377d9e331bcde5e7c84fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1\"" Feb 13 20:09:08.149316 containerd[2097]: time="2025-02-13T20:09:08.149203353Z" level=info msg="StartContainer for \"f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1\"" Feb 13 20:09:08.157706 containerd[2097]: time="2025-02-13T20:09:08.157653236Z" level=info msg="CreateContainer within sandbox \"41d881874cdc69bfb582944daac12e9b1631de3dc4ac1b6ec7146c4e7fb7c0a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18e7de1c1803cb3dedbdedffe9d456f7b8f538e5c096443639963f44bcc74cff\"" Feb 13 20:09:08.158573 containerd[2097]: time="2025-02-13T20:09:08.158394444Z" level=info 
msg="StartContainer for \"18e7de1c1803cb3dedbdedffe9d456f7b8f538e5c096443639963f44bcc74cff\"" Feb 13 20:09:08.162976 containerd[2097]: time="2025-02-13T20:09:08.162935794Z" level=info msg="CreateContainer within sandbox \"9cea9f2d4987fc80e48f972fd2b85d4f298b5d7145c1ed90b29c03dc1df30507\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042\"" Feb 13 20:09:08.164096 containerd[2097]: time="2025-02-13T20:09:08.164054447Z" level=info msg="StartContainer for \"5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042\"" Feb 13 20:09:08.301799 containerd[2097]: time="2025-02-13T20:09:08.301688990Z" level=info msg="StartContainer for \"f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1\" returns successfully" Feb 13 20:09:08.372201 containerd[2097]: time="2025-02-13T20:09:08.372143209Z" level=info msg="StartContainer for \"18e7de1c1803cb3dedbdedffe9d456f7b8f538e5c096443639963f44bcc74cff\" returns successfully" Feb 13 20:09:08.394570 containerd[2097]: time="2025-02-13T20:09:08.394527943Z" level=info msg="StartContainer for \"5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042\" returns successfully" Feb 13 20:09:08.698001 kubelet[3050]: E0213 20:09:08.697888 3050 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.93:6443: connect: connection refused Feb 13 20:09:09.696101 kubelet[3050]: I0213 20:09:09.694791 3050 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:11.459680 kubelet[3050]: E0213 20:09:11.459635 3050 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-93\" not found" node="ip-172-31-16-93" Feb 13 20:09:11.541304 kubelet[3050]: I0213 20:09:11.541269 3050 apiserver.go:52] "Watching apiserver" Feb 13 20:09:11.547046 kubelet[3050]: I0213 20:09:11.547008 3050 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-93" Feb 13 20:09:11.562747 kubelet[3050]: I0213 20:09:11.562721 3050 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:09:12.430364 update_engine[2077]: I20250213 20:09:12.430119 2077 update_attempter.cc:509] Updating boot flags... Feb 13 20:09:12.549690 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3338) Feb 13 20:09:13.024129 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3329) Feb 13 20:09:13.403175 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3329) Feb 13 20:09:14.033150 systemd[1]: Reloading requested from client PID 3592 ('systemctl') (unit session-7.scope)... Feb 13 20:09:14.033169 systemd[1]: Reloading... Feb 13 20:09:14.162102 zram_generator::config[3635]: No configuration found. Feb 13 20:09:14.320910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:09:14.420298 systemd[1]: Reloading finished in 386 ms. Feb 13 20:09:14.464366 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:09:14.466028 kubelet[3050]: E0213 20:09:14.465183 3050 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-16-93.1823dd6e37a32b63 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-93,UID:ip-172-31-16-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-93,},FirstTimestamp:2025-02-13 20:09:06.544724835 +0000 UTC m=+1.000905800,LastTimestamp:2025-02-13 20:09:06.544724835 +0000 UTC m=+1.000905800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-93,}" Feb 13 20:09:14.482539 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:09:14.482871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:14.492502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:15.119410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:15.133737 (kubelet)[3699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:09:15.224099 kubelet[3699]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:15.224099 kubelet[3699]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:09:15.224099 kubelet[3699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:15.224099 kubelet[3699]: I0213 20:09:15.223939 3699 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:09:15.234594 kubelet[3699]: I0213 20:09:15.234549 3699 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:09:15.234594 kubelet[3699]: I0213 20:09:15.234578 3699 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:09:15.234872 kubelet[3699]: I0213 20:09:15.234844 3699 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:09:15.246918 kubelet[3699]: I0213 20:09:15.246887 3699 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:09:15.256929 kubelet[3699]: I0213 20:09:15.255897 3699 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:09:15.284113 kubelet[3699]: I0213 20:09:15.284056 3699 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:09:15.284960 kubelet[3699]: I0213 20:09:15.284924 3699 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:09:15.287389 kubelet[3699]: I0213 20:09:15.285063 3699 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:09:15.287605 kubelet[3699]: I0213 20:09:15.287593 3699 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:09:15.287665 kubelet[3699]: I0213 20:09:15.287658 3699 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:09:15.287748 kubelet[3699]: I0213 20:09:15.287742 3699 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:15.287897 kubelet[3699]: I0213 20:09:15.287887 3699 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:09:15.287969 kubelet[3699]: I0213 20:09:15.287960 3699 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:09:15.288045 kubelet[3699]: I0213 20:09:15.288039 3699 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:09:15.288134 kubelet[3699]: I0213 20:09:15.288125 3699 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:09:15.300414 kubelet[3699]: I0213 20:09:15.300367 3699 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:09:15.300630 kubelet[3699]: I0213 20:09:15.300616 3699 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:09:15.301151 kubelet[3699]: I0213 20:09:15.301134 3699 server.go:1264] "Started kubelet" Feb 13 20:09:15.306114 kubelet[3699]: I0213 20:09:15.304008 3699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:09:15.320630 kubelet[3699]: I0213 20:09:15.318298 3699 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:09:15.320630 kubelet[3699]: I0213 20:09:15.319895 3699 server.go:455] "Adding debug 
handlers to kubelet server" Feb 13 20:09:15.328662 kubelet[3699]: I0213 20:09:15.323582 3699 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:09:15.328662 kubelet[3699]: I0213 20:09:15.324052 3699 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:09:15.328662 kubelet[3699]: I0213 20:09:15.324556 3699 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:09:15.334031 kubelet[3699]: I0213 20:09:15.329684 3699 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:09:15.334031 kubelet[3699]: I0213 20:09:15.329868 3699 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:09:15.336330 kubelet[3699]: I0213 20:09:15.336063 3699 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:09:15.336457 kubelet[3699]: I0213 20:09:15.336409 3699 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:09:15.342115 kubelet[3699]: E0213 20:09:15.342091 3699 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:09:15.345839 kubelet[3699]: I0213 20:09:15.345816 3699 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:09:15.351589 kubelet[3699]: I0213 20:09:15.351423 3699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:09:15.356890 kubelet[3699]: I0213 20:09:15.356053 3699 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:09:15.356890 kubelet[3699]: I0213 20:09:15.356108 3699 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:09:15.356890 kubelet[3699]: I0213 20:09:15.356125 3699 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:09:15.356890 kubelet[3699]: E0213 20:09:15.356173 3699 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:09:15.435492 kubelet[3699]: I0213 20:09:15.435381 3699 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-93" Feb 13 20:09:15.450258 kubelet[3699]: I0213 20:09:15.449495 3699 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-93" Feb 13 20:09:15.450258 kubelet[3699]: I0213 20:09:15.449633 3699 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-93" Feb 13 20:09:15.460446 kubelet[3699]: E0213 20:09:15.460054 3699 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.521689 3699 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.521707 3699 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.521732 3699 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.521997 3699 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.522010 3699 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:09:15.522113 kubelet[3699]: I0213 20:09:15.522030 
3699 policy_none.go:49] "None policy: Start" Feb 13 20:09:15.528253 kubelet[3699]: I0213 20:09:15.527827 3699 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:09:15.528253 kubelet[3699]: I0213 20:09:15.527860 3699 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:09:15.528253 kubelet[3699]: I0213 20:09:15.528211 3699 state_mem.go:75] "Updated machine memory state" Feb 13 20:09:15.532971 kubelet[3699]: I0213 20:09:15.530490 3699 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:09:15.532971 kubelet[3699]: I0213 20:09:15.530679 3699 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:09:15.537935 kubelet[3699]: I0213 20:09:15.537906 3699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:09:15.661153 kubelet[3699]: I0213 20:09:15.660765 3699 topology_manager.go:215] "Topology Admit Handler" podUID="7b037a5102af508bef32ebcbad1b0e3b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-93" Feb 13 20:09:15.661153 kubelet[3699]: I0213 20:09:15.660901 3699 topology_manager.go:215] "Topology Admit Handler" podUID="f2af8cb34e6f40dee4d03311adef1b25" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.661153 kubelet[3699]: I0213 20:09:15.660974 3699 topology_manager.go:215] "Topology Admit Handler" podUID="58d82414b5c743fa57f171ca53e496f6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-93" Feb 13 20:09:15.682148 kubelet[3699]: E0213 20:09:15.682110 3699 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-93\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.686563 kubelet[3699]: E0213 20:09:15.686230 3699 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-93\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:15.735381 kubelet[3699]: I0213 20:09:15.735094 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.735381 kubelet[3699]: I0213 20:09:15.735195 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.735381 kubelet[3699]: I0213 20:09:15.735223 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.735381 kubelet[3699]: I0213 20:09:15.735249 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:15.735381 kubelet[3699]: I0213 20:09:15.735274 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:15.735820 kubelet[3699]: I0213 20:09:15.735499 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b037a5102af508bef32ebcbad1b0e3b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-93\" (UID: \"7b037a5102af508bef32ebcbad1b0e3b\") " pod="kube-system/kube-apiserver-ip-172-31-16-93" Feb 13 20:09:15.736205 kubelet[3699]: I0213 20:09:15.736026 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.736205 kubelet[3699]: I0213 20:09:15.736115 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2af8cb34e6f40dee4d03311adef1b25-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-93\" (UID: \"f2af8cb34e6f40dee4d03311adef1b25\") " pod="kube-system/kube-controller-manager-ip-172-31-16-93" Feb 13 20:09:15.736205 kubelet[3699]: I0213 20:09:15.736175 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58d82414b5c743fa57f171ca53e496f6-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-93\" (UID: \"58d82414b5c743fa57f171ca53e496f6\") " pod="kube-system/kube-scheduler-ip-172-31-16-93" Feb 13 20:09:16.299379 kubelet[3699]: I0213 20:09:16.296479 3699 apiserver.go:52] "Watching apiserver" Feb 13 20:09:16.330575 kubelet[3699]: I0213 20:09:16.330451 3699 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:09:16.444403 kubelet[3699]: E0213 20:09:16.444367 3699 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-93\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-93" Feb 13 20:09:16.478261 kubelet[3699]: I0213 20:09:16.477431 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-93" podStartSLOduration=2.477411028 podStartE2EDuration="2.477411028s" podCreationTimestamp="2025-02-13 20:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:16.453269049 +0000 UTC m=+1.309221858" watchObservedRunningTime="2025-02-13 20:09:16.477411028 +0000 UTC m=+1.333363829" Feb 13 20:09:16.495109 kubelet[3699]: I0213 20:09:16.494491 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-93" podStartSLOduration=4.494469319 podStartE2EDuration="4.494469319s" 
podCreationTimestamp="2025-02-13 20:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:16.478651985 +0000 UTC m=+1.334604793" watchObservedRunningTime="2025-02-13 20:09:16.494469319 +0000 UTC m=+1.350422130" Feb 13 20:09:16.519981 kubelet[3699]: I0213 20:09:16.519901 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-93" podStartSLOduration=1.519860937 podStartE2EDuration="1.519860937s" podCreationTimestamp="2025-02-13 20:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:16.494925548 +0000 UTC m=+1.350878356" watchObservedRunningTime="2025-02-13 20:09:16.519860937 +0000 UTC m=+1.375813745" Feb 13 20:09:21.828879 sudo[2453]: pam_unix(sudo:session): session closed for user root Feb 13 20:09:21.853350 sshd[2449]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:21.857256 systemd[1]: sshd@6-172.31.16.93:22-139.178.89.65:56792.service: Deactivated successfully. Feb 13 20:09:21.867482 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:09:21.868708 systemd-logind[2075]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:09:21.870617 systemd-logind[2075]: Removed session 7. Feb 13 20:09:28.863784 kubelet[3699]: I0213 20:09:28.863754 3699 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:09:28.867777 kubelet[3699]: I0213 20:09:28.867047 3699 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:09:28.867863 containerd[2097]: time="2025-02-13T20:09:28.866716376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:09:29.812912 kubelet[3699]: I0213 20:09:29.810928 3699 topology_manager.go:215] "Topology Admit Handler" podUID="738f1e08-2510-4bb2-866a-424816ecec56" podNamespace="kube-system" podName="kube-proxy-njgq6" Feb 13 20:09:29.952744 kubelet[3699]: I0213 20:09:29.952551 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/738f1e08-2510-4bb2-866a-424816ecec56-kube-proxy\") pod \"kube-proxy-njgq6\" (UID: \"738f1e08-2510-4bb2-866a-424816ecec56\") " pod="kube-system/kube-proxy-njgq6" Feb 13 20:09:29.955575 kubelet[3699]: I0213 20:09:29.955301 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr7v\" (UniqueName: \"kubernetes.io/projected/738f1e08-2510-4bb2-866a-424816ecec56-kube-api-access-bhr7v\") pod \"kube-proxy-njgq6\" (UID: \"738f1e08-2510-4bb2-866a-424816ecec56\") " pod="kube-system/kube-proxy-njgq6" Feb 13 20:09:29.957268 kubelet[3699]: I0213 20:09:29.957129 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/738f1e08-2510-4bb2-866a-424816ecec56-xtables-lock\") pod \"kube-proxy-njgq6\" (UID: \"738f1e08-2510-4bb2-866a-424816ecec56\") " pod="kube-system/kube-proxy-njgq6" Feb 13 20:09:29.957268 kubelet[3699]: I0213 20:09:29.957213 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/738f1e08-2510-4bb2-866a-424816ecec56-lib-modules\") pod \"kube-proxy-njgq6\" (UID: \"738f1e08-2510-4bb2-866a-424816ecec56\") " pod="kube-system/kube-proxy-njgq6" Feb 13 20:09:29.969833 kubelet[3699]: I0213 20:09:29.969776 3699 topology_manager.go:215] "Topology Admit Handler" podUID="aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-mk28x" Feb 13 20:09:30.060617 kubelet[3699]: I0213 20:09:30.060473 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tfn\" (UniqueName: \"kubernetes.io/projected/aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04-kube-api-access-z6tfn\") pod \"tigera-operator-7bc55997bb-mk28x\" (UID: \"aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04\") " pod="tigera-operator/tigera-operator-7bc55997bb-mk28x" Feb 13 20:09:30.060781 kubelet[3699]: I0213 20:09:30.060633 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04-var-lib-calico\") pod \"tigera-operator-7bc55997bb-mk28x\" (UID: \"aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04\") " pod="tigera-operator/tigera-operator-7bc55997bb-mk28x" Feb 13 20:09:30.118949 containerd[2097]: time="2025-02-13T20:09:30.118825264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njgq6,Uid:738f1e08-2510-4bb2-866a-424816ecec56,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:30.197959 containerd[2097]: time="2025-02-13T20:09:30.197849542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:30.198939 containerd[2097]: time="2025-02-13T20:09:30.198460140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:30.198939 containerd[2097]: time="2025-02-13T20:09:30.198522616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:30.198939 containerd[2097]: time="2025-02-13T20:09:30.198681041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:30.268251 containerd[2097]: time="2025-02-13T20:09:30.268209865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njgq6,Uid:738f1e08-2510-4bb2-866a-424816ecec56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce95dbb6f508bf10c212d662f346943e01b8d0c6258023e444ab455cd2cc0080\"" Feb 13 20:09:30.278105 containerd[2097]: time="2025-02-13T20:09:30.278025640Z" level=info msg="CreateContainer within sandbox \"ce95dbb6f508bf10c212d662f346943e01b8d0c6258023e444ab455cd2cc0080\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:09:30.280488 containerd[2097]: time="2025-02-13T20:09:30.280126289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-mk28x,Uid:aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:09:30.322092 containerd[2097]: time="2025-02-13T20:09:30.321897425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:30.322092 containerd[2097]: time="2025-02-13T20:09:30.321971989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:30.322598 containerd[2097]: time="2025-02-13T20:09:30.322149599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:30.322598 containerd[2097]: time="2025-02-13T20:09:30.322371266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:30.350332 containerd[2097]: time="2025-02-13T20:09:30.350259360Z" level=info msg="CreateContainer within sandbox \"ce95dbb6f508bf10c212d662f346943e01b8d0c6258023e444ab455cd2cc0080\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8dd4f5bcaa531699c53c0d11f0d5705d26a9fc74616466a486fe1dd5f45fdbc\"" Feb 13 20:09:30.355165 containerd[2097]: time="2025-02-13T20:09:30.355128140Z" level=info msg="StartContainer for \"a8dd4f5bcaa531699c53c0d11f0d5705d26a9fc74616466a486fe1dd5f45fdbc\"" Feb 13 20:09:30.460439 containerd[2097]: time="2025-02-13T20:09:30.459938401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-mk28x,Uid:aaa2ae9e-4ec6-43f0-8e4d-eed7f2e5bb04,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b7488ee04028f66f422c0c90991f053fddac1b4247bb9dbd90d9c232970cf37\"" Feb 13 20:09:30.495797 containerd[2097]: time="2025-02-13T20:09:30.495467729Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:09:30.517743 containerd[2097]: time="2025-02-13T20:09:30.517674223Z" level=info msg="StartContainer for \"a8dd4f5bcaa531699c53c0d11f0d5705d26a9fc74616466a486fe1dd5f45fdbc\" returns successfully" Feb 13 20:09:32.597262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38987691.mount: Deactivated successfully. 
Feb 13 20:09:33.348904 containerd[2097]: time="2025-02-13T20:09:33.348850139Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:33.350887 containerd[2097]: time="2025-02-13T20:09:33.350754371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:09:33.352912 containerd[2097]: time="2025-02-13T20:09:33.352854989Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:33.358837 containerd[2097]: time="2025-02-13T20:09:33.358226730Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:33.360483 containerd[2097]: time="2025-02-13T20:09:33.360323955Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.864809007s" Feb 13 20:09:33.360483 containerd[2097]: time="2025-02-13T20:09:33.360382369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:09:33.364938 containerd[2097]: time="2025-02-13T20:09:33.364902671Z" level=info msg="CreateContainer within sandbox \"2b7488ee04028f66f422c0c90991f053fddac1b4247bb9dbd90d9c232970cf37\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:09:33.408422 containerd[2097]: time="2025-02-13T20:09:33.407906129Z" level=info msg="CreateContainer within sandbox \"2b7488ee04028f66f422c0c90991f053fddac1b4247bb9dbd90d9c232970cf37\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b\"" Feb 13 20:09:33.410518 containerd[2097]: time="2025-02-13T20:09:33.410421293Z" level=info msg="StartContainer for \"fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b\"" Feb 13 20:09:33.536250 containerd[2097]: time="2025-02-13T20:09:33.536188595Z" level=info msg="StartContainer for \"fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b\" returns successfully" Feb 13 20:09:34.064534 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:09:34.064606 systemd-resolved[1977]: Flushed all caches. Feb 13 20:09:34.066111 systemd-journald[1566]: Under memory pressure, flushing caches. 
Feb 13 20:09:34.542465 kubelet[3699]: I0213 20:09:34.535999 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-njgq6" podStartSLOduration=5.535976342 podStartE2EDuration="5.535976342s" podCreationTimestamp="2025-02-13 20:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:31.534284349 +0000 UTC m=+16.390237159" watchObservedRunningTime="2025-02-13 20:09:34.535976342 +0000 UTC m=+19.391929149" Feb 13 20:09:34.564355 kubelet[3699]: I0213 20:09:34.560551 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-mk28x" podStartSLOduration=2.661849947 podStartE2EDuration="5.560529051s" podCreationTimestamp="2025-02-13 20:09:29 +0000 UTC" firstStartedPulling="2025-02-13 20:09:30.46357847 +0000 UTC m=+15.319531261" lastFinishedPulling="2025-02-13 20:09:33.362257571 +0000 UTC m=+18.218210365" observedRunningTime="2025-02-13 20:09:34.542674034 +0000 UTC m=+19.398626833" watchObservedRunningTime="2025-02-13 20:09:34.560529051 +0000 UTC m=+19.416481860" Feb 13 20:09:37.106189 kubelet[3699]: I0213 20:09:37.105018 3699 topology_manager.go:215] "Topology Admit Handler" podUID="bd52eebf-117d-42e1-a2fb-4681a3e748a4" podNamespace="calico-system" podName="calico-typha-7d9fd7cc4d-bfjtl" Feb 13 20:09:37.223317 kubelet[3699]: I0213 20:09:37.223267 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd52eebf-117d-42e1-a2fb-4681a3e748a4-tigera-ca-bundle\") pod \"calico-typha-7d9fd7cc4d-bfjtl\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " pod="calico-system/calico-typha-7d9fd7cc4d-bfjtl" Feb 13 20:09:37.224940 kubelet[3699]: I0213 20:09:37.223323 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29gh\" (UniqueName: \"kubernetes.io/projected/bd52eebf-117d-42e1-a2fb-4681a3e748a4-kube-api-access-m29gh\") pod \"calico-typha-7d9fd7cc4d-bfjtl\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " pod="calico-system/calico-typha-7d9fd7cc4d-bfjtl" Feb 13 20:09:37.224940 kubelet[3699]: I0213 20:09:37.223351 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd52eebf-117d-42e1-a2fb-4681a3e748a4-typha-certs\") pod \"calico-typha-7d9fd7cc4d-bfjtl\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " pod="calico-system/calico-typha-7d9fd7cc4d-bfjtl" Feb 13 20:09:37.427121 containerd[2097]: time="2025-02-13T20:09:37.426901915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9fd7cc4d-bfjtl,Uid:bd52eebf-117d-42e1-a2fb-4681a3e748a4,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:37.480936 kubelet[3699]: I0213 20:09:37.478540 3699 topology_manager.go:215] "Topology Admit Handler" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" podNamespace="calico-system" podName="calico-node-pdtsj" Feb 13 20:09:37.634097 containerd[2097]: time="2025-02-13T20:09:37.630737217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:37.634097 containerd[2097]: time="2025-02-13T20:09:37.632697035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:37.634097 containerd[2097]: time="2025-02-13T20:09:37.632768429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:37.634097 containerd[2097]: time="2025-02-13T20:09:37.633677132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:37.654852 kubelet[3699]: I0213 20:09:37.653332 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-policysync\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.654852 kubelet[3699]: I0213 20:09:37.653391 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-log-dir\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.654852 kubelet[3699]: I0213 20:09:37.653421 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-flexvol-driver-host\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.654852 kubelet[3699]: I0213 20:09:37.653447 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-lib-modules\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.654852 kubelet[3699]: I0213 20:09:37.653468 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-bin-dir\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655193 kubelet[3699]: I0213 20:09:37.653492 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-net-dir\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655193 kubelet[3699]: I0213 20:09:37.653513 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-run-calico\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655193 kubelet[3699]: I0213 20:09:37.653537 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-lib-calico\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 
20:09:37.655193 kubelet[3699]: I0213 20:09:37.653566 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9z99\" (UniqueName: \"kubernetes.io/projected/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-kube-api-access-j9z99\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655193 kubelet[3699]: I0213 20:09:37.653590 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-tigera-ca-bundle\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655328 kubelet[3699]: I0213 20:09:37.653616 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-node-certs\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.655328 kubelet[3699]: I0213 20:09:37.653643 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-xtables-lock\") pod \"calico-node-pdtsj\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " pod="calico-system/calico-node-pdtsj" Feb 13 20:09:37.672265 kubelet[3699]: I0213 20:09:37.670471 3699 topology_manager.go:215] "Topology Admit Handler" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" podNamespace="calico-system" podName="csi-node-driver-g2mq8" Feb 13 20:09:37.675155 kubelet[3699]: E0213 20:09:37.674708 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:37.788155 kubelet[3699]: E0213 20:09:37.787835 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.789663 kubelet[3699]: W0213 20:09:37.788729 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.793144 kubelet[3699]: E0213 20:09:37.792722 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.793784 kubelet[3699]: E0213 20:09:37.793564 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.793784 kubelet[3699]: W0213 20:09:37.793585 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.793784 kubelet[3699]: E0213 20:09:37.793608 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.820062 containerd[2097]: time="2025-02-13T20:09:37.819723634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pdtsj,Uid:f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:37.867110 kubelet[3699]: E0213 20:09:37.862456 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.867110 kubelet[3699]: W0213 20:09:37.862496 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.867110 kubelet[3699]: E0213 20:09:37.862524 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.867110 kubelet[3699]: I0213 20:09:37.862565 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1913a1ef-26a6-4963-ad3b-0e30d0c766c9-registration-dir\") pod \"csi-node-driver-g2mq8\" (UID: \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\") " pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:37.868856 kubelet[3699]: E0213 20:09:37.868347 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.868856 kubelet[3699]: W0213 20:09:37.868377 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.868856 kubelet[3699]: E0213 20:09:37.868417 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.869796 kubelet[3699]: I0213 20:09:37.869493 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1913a1ef-26a6-4963-ad3b-0e30d0c766c9-socket-dir\") pod \"csi-node-driver-g2mq8\" (UID: \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\") " pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:37.870528 kubelet[3699]: E0213 20:09:37.870156 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.870528 kubelet[3699]: W0213 20:09:37.870173 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.870528 kubelet[3699]: E0213 20:09:37.870485 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.873430 kubelet[3699]: E0213 20:09:37.873254 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.873430 kubelet[3699]: W0213 20:09:37.873272 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.873430 kubelet[3699]: E0213 20:09:37.873392 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.875420 kubelet[3699]: E0213 20:09:37.874681 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.875420 kubelet[3699]: W0213 20:09:37.874698 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.875420 kubelet[3699]: E0213 20:09:37.875323 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.875420 kubelet[3699]: I0213 20:09:37.875364 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1913a1ef-26a6-4963-ad3b-0e30d0c766c9-kubelet-dir\") pod \"csi-node-driver-g2mq8\" (UID: \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\") " pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.876270 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.884350 kubelet[3699]: W0213 20:09:37.876284 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.876312 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.880349 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.884350 kubelet[3699]: W0213 20:09:37.880370 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.880394 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.881915 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.884350 kubelet[3699]: W0213 20:09:37.881933 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.884350 kubelet[3699]: E0213 20:09:37.881955 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.884823 kubelet[3699]: I0213 20:09:37.882610 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rxk\" (UniqueName: \"kubernetes.io/projected/1913a1ef-26a6-4963-ad3b-0e30d0c766c9-kube-api-access-85rxk\") pod \"csi-node-driver-g2mq8\" (UID: \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\") " pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:37.884823 kubelet[3699]: E0213 20:09:37.883646 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.884823 kubelet[3699]: W0213 20:09:37.883663 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.884823 kubelet[3699]: E0213 20:09:37.883681 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.884823 kubelet[3699]: I0213 20:09:37.883710 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1913a1ef-26a6-4963-ad3b-0e30d0c766c9-varrun\") pod \"csi-node-driver-g2mq8\" (UID: \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\") " pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:37.886437 kubelet[3699]: E0213 20:09:37.886411 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.886437 kubelet[3699]: W0213 20:09:37.886435 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.886821 kubelet[3699]: E0213 20:09:37.886527 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.887273 kubelet[3699]: E0213 20:09:37.887250 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.887273 kubelet[3699]: W0213 20:09:37.887268 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.888081 kubelet[3699]: E0213 20:09:37.887970 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.891661 kubelet[3699]: E0213 20:09:37.889494 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.891661 kubelet[3699]: W0213 20:09:37.889517 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.891661 kubelet[3699]: E0213 20:09:37.889629 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.896318 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.900923 kubelet[3699]: W0213 20:09:37.896345 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.896372 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.897575 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.900923 kubelet[3699]: W0213 20:09:37.897591 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.897609 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.898824 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.900923 kubelet[3699]: W0213 20:09:37.898839 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.900923 kubelet[3699]: E0213 20:09:37.898857 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.988095 kubelet[3699]: E0213 20:09:37.987612 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.988095 kubelet[3699]: W0213 20:09:37.987659 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.988095 kubelet[3699]: E0213 20:09:37.987684 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.988399 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.989393 kubelet[3699]: W0213 20:09:37.988415 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.988444 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.988733 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.989393 kubelet[3699]: W0213 20:09:37.988743 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.988942 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.989035 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.989393 kubelet[3699]: W0213 20:09:37.989043 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.989162 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.989393 kubelet[3699]: E0213 20:09:37.989389 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.989938 kubelet[3699]: W0213 20:09:37.989399 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.989938 kubelet[3699]: E0213 20:09:37.989449 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.989938 kubelet[3699]: E0213 20:09:37.989825 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.989938 kubelet[3699]: W0213 20:09:37.989838 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.989938 kubelet[3699]: E0213 20:09:37.989854 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990167 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.991246 kubelet[3699]: W0213 20:09:37.990178 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990199 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990528 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.991246 kubelet[3699]: W0213 20:09:37.990539 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990657 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990937 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.991246 kubelet[3699]: W0213 20:09:37.990947 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.991246 kubelet[3699]: E0213 20:09:37.990997 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.991714 kubelet[3699]: E0213 20:09:37.991257 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.991714 kubelet[3699]: W0213 20:09:37.991267 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.991714 kubelet[3699]: E0213 20:09:37.991380 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.991714 kubelet[3699]: E0213 20:09:37.991683 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.991714 kubelet[3699]: W0213 20:09:37.991694 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.992000 kubelet[3699]: E0213 20:09:37.991732 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.992064 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.994686 kubelet[3699]: W0213 20:09:37.992098 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.992213 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.992631 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.994686 kubelet[3699]: W0213 20:09:37.992658 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.992996 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.994686 kubelet[3699]: W0213 20:09:37.993042 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.993020 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.993139 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.994686 kubelet[3699]: E0213 20:09:37.993476 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995203 kubelet[3699]: W0213 20:09:37.993487 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.993591 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.993795 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995203 kubelet[3699]: W0213 20:09:37.993804 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.993936 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.994128 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995203 kubelet[3699]: W0213 20:09:37.994137 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.994179 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.995203 kubelet[3699]: E0213 20:09:37.994427 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995203 kubelet[3699]: W0213 20:09:37.994444 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.994488 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.994752 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995712 kubelet[3699]: W0213 20:09:37.994762 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.994853 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.995064 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995712 kubelet[3699]: W0213 20:09:37.995117 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.995208 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.995398 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.995712 kubelet[3699]: W0213 20:09:37.995406 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.995712 kubelet[3699]: E0213 20:09:37.995579 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:37.996150 kubelet[3699]: E0213 20:09:37.995775 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.996150 kubelet[3699]: W0213 20:09:37.995785 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.996150 kubelet[3699]: E0213 20:09:37.995829 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.996150 kubelet[3699]: E0213 20:09:37.996094 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.996150 kubelet[3699]: W0213 20:09:37.996103 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.996150 kubelet[3699]: E0213 20:09:37.996119 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.996891 kubelet[3699]: E0213 20:09:37.996869 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.996963 kubelet[3699]: W0213 20:09:37.996887 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.996963 kubelet[3699]: E0213 20:09:37.996910 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:37.999205 kubelet[3699]: E0213 20:09:37.999179 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:37.999301 kubelet[3699]: W0213 20:09:37.999216 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:37.999301 kubelet[3699]: E0213 20:09:37.999233 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:38.000481 containerd[2097]: time="2025-02-13T20:09:38.000362204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:38.000481 containerd[2097]: time="2025-02-13T20:09:38.000457765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:38.002191 containerd[2097]: time="2025-02-13T20:09:38.000761527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:38.002191 containerd[2097]: time="2025-02-13T20:09:38.001539124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:38.023216 kubelet[3699]: E0213 20:09:38.023189 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:38.023456 kubelet[3699]: W0213 20:09:38.023364 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:38.023456 kubelet[3699]: E0213 20:09:38.023396 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:38.093326 containerd[2097]: time="2025-02-13T20:09:38.093201493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9fd7cc4d-bfjtl,Uid:bd52eebf-117d-42e1-a2fb-4681a3e748a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\"" Feb 13 20:09:38.098934 containerd[2097]: time="2025-02-13T20:09:38.098635515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:09:38.268495 containerd[2097]: time="2025-02-13T20:09:38.268275116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pdtsj,Uid:f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\"" Feb 13 20:09:39.356668 kubelet[3699]: E0213 20:09:39.356620 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:40.214417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799455718.mount: Deactivated successfully. 
Feb 13 20:09:41.240449 containerd[2097]: time="2025-02-13T20:09:41.240399393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:41.242378 containerd[2097]: time="2025-02-13T20:09:41.242320550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:09:41.245235 containerd[2097]: time="2025-02-13T20:09:41.245177343Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:41.248390 containerd[2097]: time="2025-02-13T20:09:41.248329396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:41.255858 containerd[2097]: time="2025-02-13T20:09:41.255812314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.157130386s" Feb 13 20:09:41.256449 containerd[2097]: time="2025-02-13T20:09:41.255934930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:09:41.263091 containerd[2097]: time="2025-02-13T20:09:41.259184283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:09:41.280839 containerd[2097]: time="2025-02-13T20:09:41.280772487Z" level=info msg="CreateContainer within sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:09:41.318171 containerd[2097]: time="2025-02-13T20:09:41.317756247Z" level=info msg="CreateContainer within sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\"" Feb 13 20:09:41.319683 containerd[2097]: time="2025-02-13T20:09:41.319121762Z" level=info msg="StartContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\"" Feb 13 20:09:41.357422 kubelet[3699]: E0213 20:09:41.357372 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:41.516770 containerd[2097]: time="2025-02-13T20:09:41.516667357Z" level=info msg="StartContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" returns successfully" Feb 13 20:09:41.720232 kubelet[3699]: E0213 20:09:41.720198 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.720232 kubelet[3699]: W0213 20:09:41.720225 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.720251 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.720642 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796345 kubelet[3699]: W0213 20:09:41.720654 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.720666 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.720890 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796345 kubelet[3699]: W0213 20:09:41.720899 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.720908 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.721144 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796345 kubelet[3699]: W0213 20:09:41.721154 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796345 kubelet[3699]: E0213 20:09:41.721175 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.721516 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796909 kubelet[3699]: W0213 20:09:41.721529 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.721543 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.721764 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796909 kubelet[3699]: W0213 20:09:41.721773 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.721785 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.721986 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.796909 kubelet[3699]: W0213 20:09:41.721996 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.722008 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.796909 kubelet[3699]: E0213 20:09:41.722242 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797394 kubelet[3699]: W0213 20:09:41.722252 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.722264 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.722483 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797394 kubelet[3699]: W0213 20:09:41.722492 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.722504 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.722710 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797394 kubelet[3699]: W0213 20:09:41.722896 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.722913 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.797394 kubelet[3699]: E0213 20:09:41.723185 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797394 kubelet[3699]: W0213 20:09:41.723196 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.723209 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.723784 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797804 kubelet[3699]: W0213 20:09:41.723793 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.723817 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.724112 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797804 kubelet[3699]: W0213 20:09:41.724133 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.724149 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.724361 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.797804 kubelet[3699]: W0213 20:09:41.724380 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.797804 kubelet[3699]: E0213 20:09:41.724399 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.724653 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798173 kubelet[3699]: W0213 20:09:41.724663 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.724675 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.733250 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798173 kubelet[3699]: W0213 20:09:41.733268 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.733288 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.733577 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798173 kubelet[3699]: W0213 20:09:41.733586 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.733605 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798173 kubelet[3699]: E0213 20:09:41.733825 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798429 kubelet[3699]: W0213 20:09:41.733835 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.733852 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.734041 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798429 kubelet[3699]: W0213 20:09:41.734050 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.734065 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.734282 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798429 kubelet[3699]: W0213 20:09:41.734292 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.734308 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.798429 kubelet[3699]: E0213 20:09:41.734525 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798429 kubelet[3699]: W0213 20:09:41.734534 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.734548 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.734836 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798667 kubelet[3699]: W0213 20:09:41.734848 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.734873 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.735165 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798667 kubelet[3699]: W0213 20:09:41.735175 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.735200 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.735403 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798667 kubelet[3699]: W0213 20:09:41.735422 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798667 kubelet[3699]: E0213 20:09:41.735449 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.735717 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798929 kubelet[3699]: W0213 20:09:41.735726 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.735743 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.736050 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798929 kubelet[3699]: W0213 20:09:41.736059 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.736091 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.736375 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.798929 kubelet[3699]: W0213 20:09:41.736384 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.736400 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.798929 kubelet[3699]: E0213 20:09:41.736663 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.799349 kubelet[3699]: W0213 20:09:41.736672 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.736701 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.736974 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.799349 kubelet[3699]: W0213 20:09:41.736991 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.737010 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.737225 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.799349 kubelet[3699]: W0213 20:09:41.737236 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.737250 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:41.799349 kubelet[3699]: E0213 20:09:41.739263 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.799349 kubelet[3699]: W0213 20:09:41.739279 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.800027 kubelet[3699]: E0213 20:09:41.739297 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.800027 kubelet[3699]: E0213 20:09:41.739530 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.800027 kubelet[3699]: W0213 20:09:41.739539 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.800027 kubelet[3699]: E0213 20:09:41.739551 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:41.800027 kubelet[3699]: E0213 20:09:41.741383 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:41.800027 kubelet[3699]: W0213 20:09:41.741394 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:41.800027 kubelet[3699]: E0213 20:09:41.741415 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.649700 kubelet[3699]: I0213 20:09:42.649344 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:09:42.733104 kubelet[3699]: E0213 20:09:42.733057 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.733104 kubelet[3699]: W0213 20:09:42.733099 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.733304 kubelet[3699]: E0213 20:09:42.733126 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.733410 kubelet[3699]: E0213 20:09:42.733389 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.733410 kubelet[3699]: W0213 20:09:42.733403 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.733620 kubelet[3699]: E0213 20:09:42.733418 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.733677 kubelet[3699]: E0213 20:09:42.733631 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.733677 kubelet[3699]: W0213 20:09:42.733642 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.733677 kubelet[3699]: E0213 20:09:42.733656 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.733911 kubelet[3699]: E0213 20:09:42.733886 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.733911 kubelet[3699]: W0213 20:09:42.733902 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.734415 kubelet[3699]: E0213 20:09:42.733915 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.734550 kubelet[3699]: E0213 20:09:42.734533 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.734550 kubelet[3699]: W0213 20:09:42.734547 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.734989 kubelet[3699]: E0213 20:09:42.734562 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.734989 kubelet[3699]: E0213 20:09:42.734926 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.734989 kubelet[3699]: W0213 20:09:42.734938 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.734989 kubelet[3699]: E0213 20:09:42.734951 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.735364 kubelet[3699]: E0213 20:09:42.735198 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.735364 kubelet[3699]: W0213 20:09:42.735208 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.735364 kubelet[3699]: E0213 20:09:42.735220 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.735707 kubelet[3699]: E0213 20:09:42.735418 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.735707 kubelet[3699]: W0213 20:09:42.735427 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.735707 kubelet[3699]: E0213 20:09:42.735466 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.736247 kubelet[3699]: E0213 20:09:42.735819 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.736247 kubelet[3699]: W0213 20:09:42.735830 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.736247 kubelet[3699]: E0213 20:09:42.735843 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.736247 kubelet[3699]: E0213 20:09:42.736233 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.736247 kubelet[3699]: W0213 20:09:42.736246 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.736771 kubelet[3699]: E0213 20:09:42.736260 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.736845 kubelet[3699]: E0213 20:09:42.736833 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.736894 kubelet[3699]: W0213 20:09:42.736845 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.736894 kubelet[3699]: E0213 20:09:42.736859 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.737169 kubelet[3699]: E0213 20:09:42.737066 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.737169 kubelet[3699]: W0213 20:09:42.737135 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.737169 kubelet[3699]: E0213 20:09:42.737149 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.737457 kubelet[3699]: E0213 20:09:42.737413 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.737457 kubelet[3699]: W0213 20:09:42.737425 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.737457 kubelet[3699]: E0213 20:09:42.737438 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.737704 kubelet[3699]: E0213 20:09:42.737691 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.737756 kubelet[3699]: W0213 20:09:42.737705 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.737756 kubelet[3699]: E0213 20:09:42.737718 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.737967 kubelet[3699]: E0213 20:09:42.737950 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.737967 kubelet[3699]: W0213 20:09:42.737965 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.738064 kubelet[3699]: E0213 20:09:42.737978 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.744259 kubelet[3699]: E0213 20:09:42.743847 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.744259 kubelet[3699]: W0213 20:09:42.743864 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.744259 kubelet[3699]: E0213 20:09:42.743880 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.744620 kubelet[3699]: E0213 20:09:42.744278 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.744620 kubelet[3699]: W0213 20:09:42.744289 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.744620 kubelet[3699]: E0213 20:09:42.744303 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.744757 kubelet[3699]: E0213 20:09:42.744685 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.744757 kubelet[3699]: W0213 20:09:42.744697 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.744757 kubelet[3699]: E0213 20:09:42.744711 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.745093 kubelet[3699]: E0213 20:09:42.744959 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.745093 kubelet[3699]: W0213 20:09:42.745051 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.745093 kubelet[3699]: E0213 20:09:42.745065 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.745985 kubelet[3699]: E0213 20:09:42.745697 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.745985 kubelet[3699]: W0213 20:09:42.745710 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.745985 kubelet[3699]: E0213 20:09:42.745724 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.745985 kubelet[3699]: E0213 20:09:42.745938 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.745985 kubelet[3699]: W0213 20:09:42.745949 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.746253 kubelet[3699]: E0213 20:09:42.746106 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.746785 kubelet[3699]: E0213 20:09:42.746530 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.746785 kubelet[3699]: W0213 20:09:42.746544 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.746785 kubelet[3699]: E0213 20:09:42.746583 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.746941 kubelet[3699]: E0213 20:09:42.746839 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.746941 kubelet[3699]: W0213 20:09:42.746851 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.746941 kubelet[3699]: E0213 20:09:42.746870 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.747273 kubelet[3699]: E0213 20:09:42.747252 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.747273 kubelet[3699]: W0213 20:09:42.747266 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.747373 kubelet[3699]: E0213 20:09:42.747281 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.747725 kubelet[3699]: E0213 20:09:42.747707 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.747725 kubelet[3699]: W0213 20:09:42.747721 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.747850 kubelet[3699]: E0213 20:09:42.747818 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.748047 kubelet[3699]: E0213 20:09:42.748028 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.748047 kubelet[3699]: W0213 20:09:42.748043 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.748225 kubelet[3699]: E0213 20:09:42.748210 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.748398 kubelet[3699]: E0213 20:09:42.748359 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.748398 kubelet[3699]: W0213 20:09:42.748373 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.748592 kubelet[3699]: E0213 20:09:42.748559 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.748820 kubelet[3699]: E0213 20:09:42.748804 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.748820 kubelet[3699]: W0213 20:09:42.748817 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.748925 kubelet[3699]: E0213 20:09:42.748846 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.749317 kubelet[3699]: E0213 20:09:42.749302 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.749317 kubelet[3699]: W0213 20:09:42.749317 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.749546 kubelet[3699]: E0213 20:09:42.749331 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.749546 kubelet[3699]: E0213 20:09:42.749524 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.749546 kubelet[3699]: W0213 20:09:42.749534 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.749679 kubelet[3699]: E0213 20:09:42.749564 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.749905 kubelet[3699]: E0213 20:09:42.749886 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.749905 kubelet[3699]: W0213 20:09:42.749901 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.751337 kubelet[3699]: E0213 20:09:42.749920 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.751900 kubelet[3699]: E0213 20:09:42.751881 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.751900 kubelet[3699]: W0213 20:09:42.751896 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.752209 kubelet[3699]: E0213 20:09:42.751916 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:09:42.752335 kubelet[3699]: E0213 20:09:42.752319 3699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:09:42.752335 kubelet[3699]: W0213 20:09:42.752333 3699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:09:42.752426 kubelet[3699]: E0213 20:09:42.752348 3699 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:09:42.949584 containerd[2097]: time="2025-02-13T20:09:42.949467166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:42.952304 containerd[2097]: time="2025-02-13T20:09:42.952227794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:09:42.954670 containerd[2097]: time="2025-02-13T20:09:42.953234059Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:42.956851 containerd[2097]: time="2025-02-13T20:09:42.956478234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:42.957528 containerd[2097]: time="2025-02-13T20:09:42.957370703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.698147201s" Feb 13 20:09:42.957528 containerd[2097]: time="2025-02-13T20:09:42.957411001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:09:42.961019 containerd[2097]: time="2025-02-13T20:09:42.960984504Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:09:43.013048 containerd[2097]: time="2025-02-13T20:09:43.013006281Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\"" Feb 13 20:09:43.013733 containerd[2097]: time="2025-02-13T20:09:43.013701271Z" level=info msg="StartContainer for \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\"" Feb 13 20:09:43.138578 containerd[2097]: time="2025-02-13T20:09:43.138426234Z" level=info msg="StartContainer for \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\" returns successfully" Feb 13 20:09:43.197594 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256-rootfs.mount: Deactivated successfully. Feb 13 20:09:43.228108 containerd[2097]: time="2025-02-13T20:09:43.217174151Z" level=info msg="shim disconnected" id=03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256 namespace=k8s.io Feb 13 20:09:43.228108 containerd[2097]: time="2025-02-13T20:09:43.227527081Z" level=warning msg="cleaning up after shim disconnected" id=03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256 namespace=k8s.io Feb 13 20:09:43.228108 containerd[2097]: time="2025-02-13T20:09:43.227547590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:09:43.246876 containerd[2097]: time="2025-02-13T20:09:43.246824275Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:09:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:09:43.358698 kubelet[3699]: E0213 20:09:43.357308 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:43.654110 containerd[2097]: time="2025-02-13T20:09:43.654054507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:09:43.715175 kubelet[3699]: I0213 20:09:43.714861 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d9fd7cc4d-bfjtl" podStartSLOduration=3.552718001 podStartE2EDuration="6.714837829s" podCreationTimestamp="2025-02-13 20:09:37 +0000 UTC" firstStartedPulling="2025-02-13 20:09:38.095190247 +0000 UTC m=+22.951143047" lastFinishedPulling="2025-02-13 20:09:41.257310082 +0000 UTC m=+26.113262875" observedRunningTime="2025-02-13 20:09:41.664108143 +0000 UTC m=+26.520060952" watchObservedRunningTime="2025-02-13 20:09:43.714837829 +0000 UTC m=+28.570790640" Feb 13 20:09:45.380130 kubelet[3699]: E0213 20:09:45.363219 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:47.358482 kubelet[3699]: E0213 20:09:47.358027 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:48.960766 containerd[2097]: time="2025-02-13T20:09:48.960715554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:48.965517 containerd[2097]: time="2025-02-13T20:09:48.965451082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:09:48.973471 containerd[2097]: time="2025-02-13T20:09:48.973395143Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:48.979857 containerd[2097]: time="2025-02-13T20:09:48.978022729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:48.979857 containerd[2097]: time="2025-02-13T20:09:48.978786875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.324670628s" Feb 13 20:09:48.979857 containerd[2097]: time="2025-02-13T20:09:48.978907682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:09:48.987867 containerd[2097]: time="2025-02-13T20:09:48.987826018Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:09:49.168685 containerd[2097]: time="2025-02-13T20:09:49.168608136Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\"" Feb 13 20:09:49.175107 containerd[2097]: time="2025-02-13T20:09:49.169306654Z" level=info msg="StartContainer for \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\"" Feb 13 20:09:49.289419 containerd[2097]: time="2025-02-13T20:09:49.289364968Z" level=info msg="StartContainer for \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\" returns successfully" Feb 13 20:09:49.357412 kubelet[3699]: E0213 20:09:49.357369 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:51.357932 kubelet[3699]: E0213 20:09:51.356458 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:53.068536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209-rootfs.mount: Deactivated successfully. 
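[Editor's note] The sandbox failures that follow ("stat /var/lib/calico/nodename: no such file or directory") occur because the Calico CNI plugin, whose binaries and config the install-cni container above just placed on the host, determines its node name from a file that only a running calico-node writes. Until the calico/node image pulled later in this log is running, every RunPodSandbox and sandbox teardown fails with that message and the kubelet keeps reporting "cni plugin not initialized". A rough, hypothetical sketch of that check (not Calico's actual source) under the assumption that the nodename file lives at /var/lib/calico/nodename:

```go
// Hypothetical sketch of the nodename lookup behind the
// "stat /var/lib/calico/nodename: no such file or directory" sandbox errors:
// the CNI plugin refuses to set up or tear down pod networking until
// calico-node has written its node name to this file.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename mimics the failure mode seen in the log: a missing file
// yields an error telling the operator to check the calico/node container.
func determineNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```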
Feb 13 20:09:53.076532 kubelet[3699]: I0213 20:09:53.076490 3699 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:09:53.081697 containerd[2097]: time="2025-02-13T20:09:53.081627542Z" level=info msg="shim disconnected" id=1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209 namespace=k8s.io Feb 13 20:09:53.081697 containerd[2097]: time="2025-02-13T20:09:53.081694494Z" level=warning msg="cleaning up after shim disconnected" id=1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209 namespace=k8s.io Feb 13 20:09:53.082632 containerd[2097]: time="2025-02-13T20:09:53.081706382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:09:53.132649 containerd[2097]: time="2025-02-13T20:09:53.132456235Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:09:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:09:53.185362 kubelet[3699]: I0213 20:09:53.181936 3699 topology_manager.go:215] "Topology Admit Handler" podUID="a95f6702-f897-4b44-9e9f-23c6d7c2741b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5wkpf" Feb 13 20:09:53.205046 kubelet[3699]: I0213 20:09:53.204515 3699 topology_manager.go:215] "Topology Admit Handler" podUID="42c53b1a-63e3-4525-8d7f-9aecdf031a3b" podNamespace="calico-apiserver" podName="calico-apiserver-7766b6c6c6-45497" Feb 13 20:09:53.205046 kubelet[3699]: I0213 20:09:53.204708 3699 topology_manager.go:215] "Topology Admit Handler" podUID="ca33156d-daf7-4956-9c3c-459f7e2dd2f5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6tz9h" Feb 13 20:09:53.205046 kubelet[3699]: I0213 20:09:53.204843 3699 topology_manager.go:215] "Topology Admit Handler" podUID="454aef06-4008-4ca0-a239-19f3296963f5" podNamespace="calico-apiserver" podName="calico-apiserver-7766b6c6c6-42f6t" Feb 13 20:09:53.225873 kubelet[3699]: I0213 20:09:53.225450 3699 topology_manager.go:215] "Topology Admit Handler" podUID="ba17dcdf-1279-4496-b0fc-fdde00ad61dc" podNamespace="calico-system" podName="calico-kube-controllers-587f87bbd4-mm8mf" Feb 13 20:09:53.267482 kubelet[3699]: I0213 20:09:53.266948 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzv5\" (UniqueName: \"kubernetes.io/projected/a95f6702-f897-4b44-9e9f-23c6d7c2741b-kube-api-access-nxzv5\") pod \"coredns-7db6d8ff4d-5wkpf\" (UID: \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\") " pod="kube-system/coredns-7db6d8ff4d-5wkpf" Feb 13 20:09:53.267482 kubelet[3699]: I0213 20:09:53.266996 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlb24\" (UniqueName: \"kubernetes.io/projected/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-kube-api-access-hlb24\") pod \"calico-kube-controllers-587f87bbd4-mm8mf\" (UID: \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\") " pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" Feb 13 20:09:53.267482 kubelet[3699]: I0213 20:09:53.267025 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42c53b1a-63e3-4525-8d7f-9aecdf031a3b-calico-apiserver-certs\") pod \"calico-apiserver-7766b6c6c6-45497\" (UID: \"42c53b1a-63e3-4525-8d7f-9aecdf031a3b\") " pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" Feb 13 20:09:53.267482 kubelet[3699]: I0213 20:09:53.267091 3699 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca33156d-daf7-4956-9c3c-459f7e2dd2f5-config-volume\") pod \"coredns-7db6d8ff4d-6tz9h\" (UID: \"ca33156d-daf7-4956-9c3c-459f7e2dd2f5\") " pod="kube-system/coredns-7db6d8ff4d-6tz9h" Feb 13 20:09:53.267482 kubelet[3699]: I0213 20:09:53.267118 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zfg\" (UniqueName: \"kubernetes.io/projected/ca33156d-daf7-4956-9c3c-459f7e2dd2f5-kube-api-access-p2zfg\") pod \"coredns-7db6d8ff4d-6tz9h\" (UID: \"ca33156d-daf7-4956-9c3c-459f7e2dd2f5\") " pod="kube-system/coredns-7db6d8ff4d-6tz9h" Feb 13 20:09:53.267840 kubelet[3699]: I0213 20:09:53.267167 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95f6702-f897-4b44-9e9f-23c6d7c2741b-config-volume\") pod \"coredns-7db6d8ff4d-5wkpf\" (UID: \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\") " pod="kube-system/coredns-7db6d8ff4d-5wkpf" Feb 13 20:09:53.267840 kubelet[3699]: I0213 20:09:53.267193 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhzwr\" (UniqueName: \"kubernetes.io/projected/454aef06-4008-4ca0-a239-19f3296963f5-kube-api-access-fhzwr\") pod \"calico-apiserver-7766b6c6c6-42f6t\" (UID: \"454aef06-4008-4ca0-a239-19f3296963f5\") " pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" Feb 13 20:09:53.267840 kubelet[3699]: I0213 20:09:53.267217 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/454aef06-4008-4ca0-a239-19f3296963f5-calico-apiserver-certs\") pod \"calico-apiserver-7766b6c6c6-42f6t\" (UID: \"454aef06-4008-4ca0-a239-19f3296963f5\") " pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" Feb 13 20:09:53.267840 kubelet[3699]: I0213 20:09:53.267245 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-tigera-ca-bundle\") pod \"calico-kube-controllers-587f87bbd4-mm8mf\" (UID: \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\") " pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" Feb 13 20:09:53.267840 kubelet[3699]: I0213 20:09:53.267274 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttxw\" (UniqueName: \"kubernetes.io/projected/42c53b1a-63e3-4525-8d7f-9aecdf031a3b-kube-api-access-rttxw\") pod \"calico-apiserver-7766b6c6c6-45497\" (UID: \"42c53b1a-63e3-4525-8d7f-9aecdf031a3b\") " pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" Feb 13 20:09:53.365029 containerd[2097]: time="2025-02-13T20:09:53.363809347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2mq8,Uid:1913a1ef-26a6-4963-ad3b-0e30d0c766c9,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:53.549144 containerd[2097]: time="2025-02-13T20:09:53.548758307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wkpf,Uid:a95f6702-f897-4b44-9e9f-23c6d7c2741b,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:53.562978 containerd[2097]: time="2025-02-13T20:09:53.562896092Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-45497,Uid:42c53b1a-63e3-4525-8d7f-9aecdf031a3b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:09:53.563299 containerd[2097]: time="2025-02-13T20:09:53.563268144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587f87bbd4-mm8mf,Uid:ba17dcdf-1279-4496-b0fc-fdde00ad61dc,Namespace:calico-system,Attempt:0,}" Feb 13 20:09:53.564391 containerd[2097]: time="2025-02-13T20:09:53.564358044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-42f6t,Uid:454aef06-4008-4ca0-a239-19f3296963f5,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:09:53.565095 containerd[2097]: time="2025-02-13T20:09:53.565039557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6tz9h,Uid:ca33156d-daf7-4956-9c3c-459f7e2dd2f5,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:53.832788 containerd[2097]: time="2025-02-13T20:09:53.832742976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:09:53.836038 containerd[2097]: time="2025-02-13T20:09:53.835539067Z" level=error msg="Failed to destroy network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.854066 containerd[2097]: time="2025-02-13T20:09:53.853907787Z" level=error msg="Failed to destroy network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.858739 containerd[2097]: time="2025-02-13T20:09:53.858611284Z" level=error msg="encountered an error cleaning up failed sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.878465 containerd[2097]: time="2025-02-13T20:09:53.878154316Z" level=error msg="encountered an error cleaning up failed sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.885113 containerd[2097]: time="2025-02-13T20:09:53.884344272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2mq8,Uid:1913a1ef-26a6-4963-ad3b-0e30d0c766c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.915429 containerd[2097]: time="2025-02-13T20:09:53.915180609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wkpf,Uid:a95f6702-f897-4b44-9e9f-23c6d7c2741b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.934634 kubelet[3699]: E0213 20:09:53.912973 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.934634 kubelet[3699]: E0213 20:09:53.920085 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:53.934634 kubelet[3699]: E0213 20:09:53.932890 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5wkpf" Feb 13 20:09:53.934634 kubelet[3699]: E0213 20:09:53.932925 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5wkpf" Feb 13 20:09:53.936173 kubelet[3699]: E0213 20:09:53.932984 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5wkpf_kube-system(a95f6702-f897-4b44-9e9f-23c6d7c2741b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5wkpf_kube-system(a95f6702-f897-4b44-9e9f-23c6d7c2741b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5wkpf" podUID="a95f6702-f897-4b44-9e9f-23c6d7c2741b" Feb 13 20:09:53.936173 kubelet[3699]: E0213 20:09:53.933930 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:53.936173 kubelet[3699]: E0213 20:09:53.933968 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2mq8" Feb 13 20:09:53.936484 kubelet[3699]: E0213 20:09:53.934020 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g2mq8_calico-system(1913a1ef-26a6-4963-ad3b-0e30d0c766c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g2mq8_calico-system(1913a1ef-26a6-4963-ad3b-0e30d0c766c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:09:54.077887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8-shm.mount: Deactivated successfully. Feb 13 20:09:54.138169 containerd[2097]: time="2025-02-13T20:09:54.136783067Z" level=error msg="Failed to destroy network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.140431 containerd[2097]: time="2025-02-13T20:09:54.138195597Z" level=error msg="encountered an error cleaning up failed sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.140431 containerd[2097]: time="2025-02-13T20:09:54.138292252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-45497,Uid:42c53b1a-63e3-4525-8d7f-9aecdf031a3b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.141325 kubelet[3699]: E0213 20:09:54.140343 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.141325 kubelet[3699]: E0213 20:09:54.140709 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" Feb 13 20:09:54.141325 kubelet[3699]: E0213 20:09:54.140740 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" Feb 13 20:09:54.141761 kubelet[3699]: E0213 20:09:54.140801 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7766b6c6c6-45497_calico-apiserver(42c53b1a-63e3-4525-8d7f-9aecdf031a3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7766b6c6c6-45497_calico-apiserver(42c53b1a-63e3-4525-8d7f-9aecdf031a3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" podUID="42c53b1a-63e3-4525-8d7f-9aecdf031a3b" Feb 13 20:09:54.143050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981-shm.mount: Deactivated successfully. Feb 13 20:09:54.157003 containerd[2097]: time="2025-02-13T20:09:54.156936417Z" level=error msg="Failed to destroy network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.164109 containerd[2097]: time="2025-02-13T20:09:54.164023486Z" level=error msg="Failed to destroy network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.166112 containerd[2097]: time="2025-02-13T20:09:54.164824868Z" level=error msg="encountered an error cleaning up failed sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.166112 containerd[2097]: time="2025-02-13T20:09:54.164893630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6tz9h,Uid:ca33156d-daf7-4956-9c3c-459f7e2dd2f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.168944 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482-shm.mount: Deactivated successfully. Feb 13 20:09:54.170223 kubelet[3699]: E0213 20:09:54.169113 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.170223 kubelet[3699]: E0213 20:09:54.169986 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6tz9h" Feb 13 20:09:54.170223 kubelet[3699]: E0213 20:09:54.170143 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6tz9h" Feb 13 20:09:54.174875 kubelet[3699]: E0213 20:09:54.174751 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6tz9h_kube-system(ca33156d-daf7-4956-9c3c-459f7e2dd2f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6tz9h_kube-system(ca33156d-daf7-4956-9c3c-459f7e2dd2f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6tz9h" podUID="ca33156d-daf7-4956-9c3c-459f7e2dd2f5" Feb 13 20:09:54.177648 containerd[2097]: time="2025-02-13T20:09:54.177598595Z" level=error msg="Failed to destroy network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.183022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3-shm.mount: Deactivated successfully. 
Feb 13 20:09:54.184947 containerd[2097]: time="2025-02-13T20:09:54.184107792Z" level=error msg="encountered an error cleaning up failed sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.184947 containerd[2097]: time="2025-02-13T20:09:54.184182229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-42f6t,Uid:454aef06-4008-4ca0-a239-19f3296963f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.185424 kubelet[3699]: E0213 20:09:54.184425 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.185424 kubelet[3699]: E0213 20:09:54.184483 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" Feb 13 20:09:54.185424 kubelet[3699]: E0213 20:09:54.184512 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" Feb 13 20:09:54.185950 kubelet[3699]: E0213 20:09:54.184562 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7766b6c6c6-42f6t_calico-apiserver(454aef06-4008-4ca0-a239-19f3296963f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7766b6c6c6-42f6t_calico-apiserver(454aef06-4008-4ca0-a239-19f3296963f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" podUID="454aef06-4008-4ca0-a239-19f3296963f5" Feb 13 20:09:54.200815 containerd[2097]: time="2025-02-13T20:09:54.200760467Z" level=error msg="encountered an error cleaning up failed sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.201038 containerd[2097]: time="2025-02-13T20:09:54.201010535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587f87bbd4-mm8mf,Uid:ba17dcdf-1279-4496-b0fc-fdde00ad61dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.201595 kubelet[3699]: E0213 20:09:54.201552 3699 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:54.201958 kubelet[3699]: E0213 20:09:54.201918 3699 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" Feb 13 20:09:54.202102 kubelet[3699]: E0213 20:09:54.202082 3699 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" Feb 13 20:09:54.202525 kubelet[3699]: E0213 20:09:54.202242 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-587f87bbd4-mm8mf_calico-system(ba17dcdf-1279-4496-b0fc-fdde00ad61dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-587f87bbd4-mm8mf_calico-system(ba17dcdf-1279-4496-b0fc-fdde00ad61dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" podUID="ba17dcdf-1279-4496-b0fc-fdde00ad61dc" Feb 13 20:09:54.206301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6-shm.mount: Deactivated successfully. 
Feb 13 20:09:54.736172 kubelet[3699]: I0213 20:09:54.736120 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:09:54.741126 kubelet[3699]: I0213 20:09:54.741063 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:09:54.756824 kubelet[3699]: I0213 20:09:54.756102 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:09:54.759837 kubelet[3699]: I0213 20:09:54.758162 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:09:54.762824 kubelet[3699]: I0213 20:09:54.761926 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:09:54.764897 kubelet[3699]: I0213 20:09:54.764869 3699 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:09:54.834663 containerd[2097]: time="2025-02-13T20:09:54.834207260Z" level=info msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" Feb 13 20:09:54.836856 containerd[2097]: time="2025-02-13T20:09:54.835805395Z" level=info msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" Feb 13 20:09:54.837010 containerd[2097]: time="2025-02-13T20:09:54.836853122Z" level=info msg="Ensure that sandbox f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482 in task-service has been cleanup successfully" Feb 13 20:09:54.837693 containerd[2097]: time="2025-02-13T20:09:54.837189369Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:09:54.837693 containerd[2097]: time="2025-02-13T20:09:54.837422825Z" level=info msg="Ensure that sandbox b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e in task-service has been cleanup successfully" Feb 13 20:09:54.854793 containerd[2097]: time="2025-02-13T20:09:54.854752277Z" level=info msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" Feb 13 20:09:54.855215 containerd[2097]: time="2025-02-13T20:09:54.855180365Z" level=info msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" Feb 13 20:09:54.855619 containerd[2097]: time="2025-02-13T20:09:54.855446179Z" level=info msg="Ensure that sandbox c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981 in task-service has been cleanup successfully" Feb 13 20:09:54.855901 containerd[2097]: time="2025-02-13T20:09:54.855876610Z" level=info msg="Ensure that sandbox 93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3 in task-service has been cleanup successfully" Feb 13 20:09:54.858340 containerd[2097]: time="2025-02-13T20:09:54.858294670Z" level=info msg="Ensure that sandbox a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8 in task-service has been cleanup successfully" Feb 13 20:09:54.858803 containerd[2097]: time="2025-02-13T20:09:54.858773614Z" level=info msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" Feb 13 20:09:54.861779 
containerd[2097]: time="2025-02-13T20:09:54.861687880Z" level=info msg="Ensure that sandbox 3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6 in task-service has been cleanup successfully" Feb 13 20:09:55.085512 containerd[2097]: time="2025-02-13T20:09:55.085458772Z" level=error msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" failed" error="failed to destroy network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.086144 kubelet[3699]: E0213 20:09:55.085907 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:09:55.086144 kubelet[3699]: E0213 20:09:55.085977 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6"} Feb 13 20:09:55.086144 kubelet[3699]: E0213 20:09:55.086055 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:09:55.088380 kubelet[3699]: E0213 20:09:55.088191 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" podUID="ba17dcdf-1279-4496-b0fc-fdde00ad61dc" Feb 13 20:09:55.134940 containerd[2097]: time="2025-02-13T20:09:55.134831693Z" level=error msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" failed" error="failed to destroy network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.138881 kubelet[3699]: E0213 20:09:55.135540 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:09:55.138881 kubelet[3699]: E0213 20:09:55.138720 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3"} Feb 13 20:09:55.138881 kubelet[3699]: E0213 20:09:55.138792 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"454aef06-4008-4ca0-a239-19f3296963f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:09:55.138881 kubelet[3699]: E0213 20:09:55.138826 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"454aef06-4008-4ca0-a239-19f3296963f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" podUID="454aef06-4008-4ca0-a239-19f3296963f5" Feb 13 20:09:55.139868 containerd[2097]: time="2025-02-13T20:09:55.139652890Z" level=error msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" failed" error="failed to destroy network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.142194 kubelet[3699]: E0213 20:09:55.139991 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:09:55.142194 kubelet[3699]: E0213 20:09:55.140042 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482"} Feb 13 20:09:55.142194 kubelet[3699]: E0213 20:09:55.140257 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca33156d-daf7-4956-9c3c-459f7e2dd2f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:09:55.142194 kubelet[3699]: E0213 20:09:55.140294 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"ca33156d-daf7-4956-9c3c-459f7e2dd2f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6tz9h" podUID="ca33156d-daf7-4956-9c3c-459f7e2dd2f5" Feb 13 20:09:55.148153 containerd[2097]: time="2025-02-13T20:09:55.148024806Z" level=error msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" failed" error="failed to destroy network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.148426 containerd[2097]: time="2025-02-13T20:09:55.148252361Z" level=error msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" failed" error="failed to destroy network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.148854 kubelet[3699]: E0213 20:09:55.148550 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:09:55.148854 kubelet[3699]: E0213 20:09:55.148716 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981"} Feb 13 20:09:55.148854 kubelet[3699]: E0213 20:09:55.148550 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:09:55.148854 kubelet[3699]: E0213 20:09:55.148768 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e"} Feb 13 20:09:55.148854 kubelet[3699]: E0213 20:09:55.148760 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42c53b1a-63e3-4525-8d7f-9aecdf031a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Feb 13 20:09:55.149279 kubelet[3699]: E0213 20:09:55.148801 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:09:55.149279 kubelet[3699]: E0213 20:09:55.148849 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5wkpf" podUID="a95f6702-f897-4b44-9e9f-23c6d7c2741b" Feb 13 20:09:55.149279 kubelet[3699]: E0213 20:09:55.148803 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42c53b1a-63e3-4525-8d7f-9aecdf031a3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" podUID="42c53b1a-63e3-4525-8d7f-9aecdf031a3b" Feb 13 20:09:55.155492 containerd[2097]: time="2025-02-13T20:09:55.155439581Z" level=error msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" failed" error="failed to destroy network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:09:55.156253 kubelet[3699]: E0213 20:09:55.155689 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:09:55.156253 kubelet[3699]: E0213 20:09:55.156030 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8"} Feb 13 20:09:55.156253 kubelet[3699]: E0213 20:09:55.156106 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:09:55.156253 kubelet[3699]: E0213 20:09:55.156132 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1913a1ef-26a6-4963-ad3b-0e30d0c766c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2mq8" podUID="1913a1ef-26a6-4963-ad3b-0e30d0c766c9" Feb 13 20:10:02.542591 kubelet[3699]: I0213 20:10:02.542539 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:04.082331 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:04.080266 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:04.080338 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:04.942898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684015855.mount: Deactivated successfully. Feb 13 20:10:05.331121 containerd[2097]: time="2025-02-13T20:10:05.290447018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:10:05.331121 containerd[2097]: time="2025-02-13T20:10:05.328946352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 11.484051224s" Feb 13 20:10:05.331121 containerd[2097]: time="2025-02-13T20:10:05.329040484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:10:05.339814 containerd[2097]: time="2025-02-13T20:10:05.339606835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:05.380494 containerd[2097]: time="2025-02-13T20:10:05.380438996Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:05.381318 containerd[2097]: time="2025-02-13T20:10:05.381246071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:05.450216 containerd[2097]: time="2025-02-13T20:10:05.450166504Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:10:05.682563 containerd[2097]: time="2025-02-13T20:10:05.682228324Z" level=info msg="CreateContainer within sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\"" Feb 13 20:10:05.697174 
containerd[2097]: time="2025-02-13T20:10:05.690300629Z" level=info msg="StartContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\"" Feb 13 20:10:05.884418 systemd[1]: Started sshd@7-172.31.16.93:22-139.178.89.65:45244.service - OpenSSH per-connection server daemon (139.178.89.65:45244). Feb 13 20:10:06.131155 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:06.129913 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:06.129948 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:06.184731 sshd[4769]: Accepted publickey for core from 139.178.89.65 port 45244 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:06.201654 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:06.279009 systemd-logind[2075]: New session 8 of user core. Feb 13 20:10:06.290485 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:10:06.327042 systemd[1]: run-containerd-runc-k8s.io-1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a-runc.waSi1Z.mount: Deactivated successfully. Feb 13 20:10:06.369422 containerd[2097]: time="2025-02-13T20:10:06.364518373Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:10:06.550199 containerd[2097]: time="2025-02-13T20:10:06.550136377Z" level=info msg="StartContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" returns successfully" Feb 13 20:10:06.602227 containerd[2097]: time="2025-02-13T20:10:06.602170545Z" level=error msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" failed" error="failed to destroy network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:10:06.602454 kubelet[3699]: E0213 20:10:06.602405 3699 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:10:06.603045 kubelet[3699]: E0213 20:10:06.602467 3699 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e"} Feb 13 20:10:06.613897 kubelet[3699]: E0213 20:10:06.613189 3699 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:10:06.613897 kubelet[3699]: E0213 20:10:06.613328 3699 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a95f6702-f897-4b44-9e9f-23c6d7c2741b\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5wkpf" podUID="a95f6702-f897-4b44-9e9f-23c6d7c2741b" Feb 13 20:10:06.750377 sshd[4769]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:06.757232 systemd-logind[2075]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:10:06.759757 systemd[1]: sshd@7-172.31.16.93:22-139.178.89.65:45244.service: Deactivated successfully. Feb 13 20:10:06.769038 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:10:06.771291 systemd-logind[2075]: Removed session 8. Feb 13 20:10:07.077130 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:10:07.077736 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Feb 13 20:10:07.379571 containerd[2097]: time="2025-02-13T20:10:07.378574579Z" level=info msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" Feb 13 20:10:07.592395 kubelet[3699]: I0213 20:10:07.575992 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pdtsj" podStartSLOduration=3.455009644 podStartE2EDuration="30.565680685s" podCreationTimestamp="2025-02-13 20:09:37 +0000 UTC" firstStartedPulling="2025-02-13 20:09:38.272715113 +0000 UTC m=+23.128667914" lastFinishedPulling="2025-02-13 20:10:05.383386147 +0000 UTC m=+50.239338955" observedRunningTime="2025-02-13 20:10:06.959808402 +0000 UTC m=+51.815761210" watchObservedRunningTime="2025-02-13 20:10:07.565680685 +0000 UTC m=+52.421633497" Feb 13 20:10:07.937263 systemd[1]: run-containerd-runc-k8s.io-1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a-runc.7AMU7F.mount: Deactivated successfully. Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.572 [INFO][4888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.573 [INFO][4888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" iface="eth0" netns="/var/run/netns/cni-11ba9aad-046a-bf8d-a1df-fb5fa30c6666" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.574 [INFO][4888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" iface="eth0" netns="/var/run/netns/cni-11ba9aad-046a-bf8d-a1df-fb5fa30c6666" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.578 [INFO][4888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" iface="eth0" netns="/var/run/netns/cni-11ba9aad-046a-bf8d-a1df-fb5fa30c6666" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.578 [INFO][4888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:07.578 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.070 [INFO][4895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.073 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.073 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.096 [WARNING][4895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.096 [INFO][4895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.099 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:08.107860 containerd[2097]: 2025-02-13 20:10:08.103 [INFO][4888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:08.112000 containerd[2097]: time="2025-02-13T20:10:08.108327517Z" level=info msg="TearDown network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" successfully" Feb 13 20:10:08.112000 containerd[2097]: time="2025-02-13T20:10:08.108362163Z" level=info msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" returns successfully" Feb 13 20:10:08.112631 containerd[2097]: time="2025-02-13T20:10:08.112591150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2mq8,Uid:1913a1ef-26a6-4963-ad3b-0e30d0c766c9,Namespace:calico-system,Attempt:1,}" Feb 13 20:10:08.120528 systemd[1]: run-netns-cni\x2d11ba9aad\x2d046a\x2dbf8d\x2da1df\x2dfb5fa30c6666.mount: Deactivated successfully. 
Feb 13 20:10:08.360771 containerd[2097]: time="2025-02-13T20:10:08.360733630Z" level=info msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" Feb 13 20:10:08.360986 containerd[2097]: time="2025-02-13T20:10:08.360800650Z" level=info msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" Feb 13 20:10:08.381327 systemd-networkd[1657]: cali505b60cef5d: Link UP Feb 13 20:10:08.383050 (udev-worker)[4841]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:10:08.399157 systemd-networkd[1657]: cali505b60cef5d: Gained carrier Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.220 [INFO][4937] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.240 [INFO][4937] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0 csi-node-driver- calico-system 1913a1ef-26a6-4963-ad3b-0e30d0c766c9 838 0 2025-02-13 20:09:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-93 csi-node-driver-g2mq8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali505b60cef5d [] []}} ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.240 [INFO][4937] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.288 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" HandleID="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.299 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" HandleID="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d39b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-93", "pod":"csi-node-driver-g2mq8", "timestamp":"2025-02-13 20:10:08.288257214 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.299 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.299 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.299 [INFO][4948] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.302 [INFO][4948] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.316 [INFO][4948] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.326 [INFO][4948] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.330 [INFO][4948] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.333 [INFO][4948] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.333 [INFO][4948] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.335 [INFO][4948] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0 Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.341 [INFO][4948] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.347 [INFO][4948] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.65/26] block=192.168.111.64/26 handle="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.347 [INFO][4948] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.65/26] handle="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" host="ip-172-31-16-93" Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.347 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
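
The IPAM exchange above claims 192.168.111.65 out of the host-affine block 192.168.111.64/26 for csi-node-driver-g2mq8. A short sketch, using only values printed in the log, that checks the claimed address really sits inside that block and shows how many addresses a /26 holds (.64 through .127, i.e. 64):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and address exactly as reported by ipam/ipam.go above.
        block := netip.MustParsePrefix("192.168.111.64/26")
        addr := netip.MustParseAddr("192.168.111.65")

        fmt.Println(block.Contains(addr))     // true
        fmt.Println(1 << (32 - block.Bits())) // 64 addresses in a /26
    }

The later assignments in this section (.66 for calico-kube-controllers, .67 for calico-apiserver, .68 for coredns) come out of the same block, which is why every run reloads the same affinity for ip-172-31-16-93.
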
Feb 13 20:10:08.458042 containerd[2097]: 2025-02-13 20:10:08.348 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.65/26] IPv6=[] ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" HandleID="k8s-pod-network.302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.351 [INFO][4937] cni-plugin/k8s.go 386: Populated endpoint ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1913a1ef-26a6-4963-ad3b-0e30d0c766c9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"csi-node-driver-g2mq8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali505b60cef5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.351 [INFO][4937] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.65/32] ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.351 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali505b60cef5d ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.400 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.412 [INFO][4937] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" 
Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1913a1ef-26a6-4963-ad3b-0e30d0c766c9", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0", Pod:"csi-node-driver-g2mq8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali505b60cef5d", MAC:"a2:8e:9e:01:28:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:08.461798 containerd[2097]: 2025-02-13 20:10:08.450 [INFO][4937] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0" Namespace="calico-system" Pod="csi-node-driver-g2mq8" WorkloadEndpoint="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:08.543529 containerd[2097]: time="2025-02-13T20:10:08.543407189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:08.545015 containerd[2097]: time="2025-02-13T20:10:08.544458538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:08.545015 containerd[2097]: time="2025-02-13T20:10:08.544657982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:08.546246 containerd[2097]: time="2025-02-13T20:10:08.546173071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.574 [INFO][4986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.581 [INFO][4986] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" iface="eth0" netns="/var/run/netns/cni-9cac3a1f-d9a8-bf10-57b8-71aba7f5e266" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.583 [INFO][4986] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" iface="eth0" netns="/var/run/netns/cni-9cac3a1f-d9a8-bf10-57b8-71aba7f5e266" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4986] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" iface="eth0" netns="/var/run/netns/cni-9cac3a1f-d9a8-bf10-57b8-71aba7f5e266" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.651 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.651 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.651 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.662 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.662 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.664 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:08.695852 containerd[2097]: 2025-02-13 20:10:08.682 [INFO][4986] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:08.697179 containerd[2097]: time="2025-02-13T20:10:08.697143684Z" level=info msg="TearDown network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" successfully" Feb 13 20:10:08.697296 containerd[2097]: time="2025-02-13T20:10:08.697280240Z" level=info msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" returns successfully" Feb 13 20:10:08.699400 containerd[2097]: time="2025-02-13T20:10:08.699368157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587f87bbd4-mm8mf,Uid:ba17dcdf-1279-4496-b0fc-fdde00ad61dc,Namespace:calico-system,Attempt:1,}" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.583 [INFO][4982] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.583 [INFO][4982] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" iface="eth0" netns="/var/run/netns/cni-edc7a730-6d69-b2ba-1771-d2af1e078c6c" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4982] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" iface="eth0" netns="/var/run/netns/cni-edc7a730-6d69-b2ba-1771-d2af1e078c6c" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4982] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" iface="eth0" netns="/var/run/netns/cni-edc7a730-6d69-b2ba-1771-d2af1e078c6c" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.584 [INFO][4982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.585 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.680 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.681 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.682 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.709 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.709 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.717 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:08.736928 containerd[2097]: 2025-02-13 20:10:08.728 [INFO][4982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:08.770466 containerd[2097]: time="2025-02-13T20:10:08.738037504Z" level=info msg="TearDown network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" successfully" Feb 13 20:10:08.770466 containerd[2097]: time="2025-02-13T20:10:08.738273499Z" level=info msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" returns successfully" Feb 13 20:10:08.770466 containerd[2097]: time="2025-02-13T20:10:08.743825007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2mq8,Uid:1913a1ef-26a6-4963-ad3b-0e30d0c766c9,Namespace:calico-system,Attempt:1,} returns sandbox id \"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0\"" Feb 13 20:10:08.770466 containerd[2097]: time="2025-02-13T20:10:08.748189382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-45497,Uid:42c53b1a-63e3-4525-8d7f-9aecdf031a3b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:10:08.770466 containerd[2097]: time="2025-02-13T20:10:08.751280848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:10:08.943100 systemd[1]: run-netns-cni\x2dedc7a730\x2d6d69\x2db2ba\x2d1771\x2dd2af1e078c6c.mount: Deactivated successfully. Feb 13 20:10:08.944319 systemd[1]: run-netns-cni\x2d9cac3a1f\x2dd9a8\x2dbf10\x2d57b8\x2d71aba7f5e266.mount: Deactivated successfully. 
Feb 13 20:10:09.363530 containerd[2097]: time="2025-02-13T20:10:09.363366055Z" level=info msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" Feb 13 20:10:09.369162 containerd[2097]: time="2025-02-13T20:10:09.368834016Z" level=info msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" Feb 13 20:10:09.519424 systemd-networkd[1657]: calie35944538e5: Link UP Feb 13 20:10:09.524617 systemd-networkd[1657]: calie35944538e5: Gained carrier Feb 13 20:10:09.586359 systemd-networkd[1657]: cali505b60cef5d: Gained IPv6LL Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.041 [INFO][5061] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.100 [INFO][5061] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0 calico-kube-controllers-587f87bbd4- calico-system ba17dcdf-1279-4496-b0fc-fdde00ad61dc 857 0 2025-02-13 20:09:37 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:587f87bbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-93 calico-kube-controllers-587f87bbd4-mm8mf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie35944538e5 [] []}} ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.100 [INFO][5061] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.260 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.286 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051570), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-93", "pod":"calico-kube-controllers-587f87bbd4-mm8mf", "timestamp":"2025-02-13 20:10:09.259862473 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.287 [INFO][5114] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.287 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.288 [INFO][5114] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.292 [INFO][5114] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.308 [INFO][5114] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.339 [INFO][5114] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.346 [INFO][5114] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.363 [INFO][5114] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.365 [INFO][5114] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.372 [INFO][5114] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468 Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.395 [INFO][5114] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.420 [INFO][5114] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.66/26] block=192.168.111.64/26 handle="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.420 [INFO][5114] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.66/26] handle="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" host="ip-172-31-16-93" Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.420 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
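
"Gained IPv6LL" a few entries up means the newly created cali* veth acquired an IPv6 link-local address; the address itself is not logged. As a purely illustrative aside on the classic modified-EUI-64 derivation, the sketch below feeds in the workload-endpoint MAC recorded earlier (a2:8e:9e:01:28:c5) as sample input only — the host-side interface has its own MAC, and kernels using stable-privacy address generation derive their link-local addresses differently:

    package main

    import (
        "fmt"
        "net"
    )

    // linkLocalFromMAC builds fe80::/64 + modified EUI-64 from a 48-bit MAC.
    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 prefix
        ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe // EUI-64 filler bytes
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("a2:8e:9e:01:28:c5")
        fmt.Println(linkLocalFromMAC(mac)) // fe80::a08e:9eff:fe01:28c5
    }
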
Feb 13 20:10:09.616754 containerd[2097]: 2025-02-13 20:10:09.420 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.66/26] IPv6=[] ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.434 [INFO][5061] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0", GenerateName:"calico-kube-controllers-587f87bbd4-", Namespace:"calico-system", SelfLink:"", UID:"ba17dcdf-1279-4496-b0fc-fdde00ad61dc", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587f87bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-kube-controllers-587f87bbd4-mm8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie35944538e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.496 [INFO][5061] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.66/32] ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.497 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie35944538e5 ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.524 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.534 [INFO][5061] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0", GenerateName:"calico-kube-controllers-587f87bbd4-", Namespace:"calico-system", SelfLink:"", UID:"ba17dcdf-1279-4496-b0fc-fdde00ad61dc", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587f87bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468", Pod:"calico-kube-controllers-587f87bbd4-mm8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie35944538e5", MAC:"6e:f2:16:56:de:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:09.629003 containerd[2097]: 2025-02-13 20:10:09.558 [INFO][5061] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Namespace="calico-system" Pod="calico-kube-controllers-587f87bbd4-mm8mf" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:09.849134 systemd-networkd[1657]: calie44524c144e: Link UP Feb 13 20:10:09.871559 systemd-networkd[1657]: calie44524c144e: Gained carrier Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.089 [INFO][5068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.188 [INFO][5068] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0 calico-apiserver-7766b6c6c6- calico-apiserver 42c53b1a-63e3-4525-8d7f-9aecdf031a3b 858 0 2025-02-13 20:09:38 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7766b6c6c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-93 calico-apiserver-7766b6c6c6-45497 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie44524c144e [] []}} ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" 
WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.189 [INFO][5068] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.436 [INFO][5144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" HandleID="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.581 [INFO][5144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" HandleID="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035b4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-93", "pod":"calico-apiserver-7766b6c6c6-45497", "timestamp":"2025-02-13 20:10:09.43658298 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.581 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.582 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.582 [INFO][5144] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.599 [INFO][5144] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.658 [INFO][5144] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.698 [INFO][5144] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.709 [INFO][5144] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.718 [INFO][5144] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.718 [INFO][5144] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.739 [INFO][5144] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4 Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.761 [INFO][5144] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.789 [INFO][5144] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.67/26] block=192.168.111.64/26 handle="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.790 [INFO][5144] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.67/26] handle="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" host="ip-172-31-16-93" Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.790 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:10:09.951551 containerd[2097]: 2025-02-13 20:10:09.793 [INFO][5144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.67/26] IPv6=[] ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" HandleID="k8s-pod-network.45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.812 [INFO][5068] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42c53b1a-63e3-4525-8d7f-9aecdf031a3b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-apiserver-7766b6c6c6-45497", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie44524c144e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.814 [INFO][5068] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.67/32] ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.815 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie44524c144e ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.874 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.906 [INFO][5068] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42c53b1a-63e3-4525-8d7f-9aecdf031a3b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4", Pod:"calico-apiserver-7766b6c6c6-45497", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie44524c144e", MAC:"ba:a1:e7:b0:c5:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:09.953352 containerd[2097]: 2025-02-13 20:10:09.946 [INFO][5068] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-45497" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:10.068906 containerd[2097]: time="2025-02-13T20:10:10.068336012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:10.068906 containerd[2097]: time="2025-02-13T20:10:10.068415417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:10.068906 containerd[2097]: time="2025-02-13T20:10:10.068443111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.068906 containerd[2097]: time="2025-02-13T20:10:10.068584768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.927 [INFO][5191] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.935 [INFO][5191] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" iface="eth0" netns="/var/run/netns/cni-55c51574-8502-cca3-50f9-f29297c68f40" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.937 [INFO][5191] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" iface="eth0" netns="/var/run/netns/cni-55c51574-8502-cca3-50f9-f29297c68f40" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.937 [INFO][5191] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" iface="eth0" netns="/var/run/netns/cni-55c51574-8502-cca3-50f9-f29297c68f40" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.937 [INFO][5191] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:09.937 [INFO][5191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.064 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.064 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.064 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.086 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.086 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.094 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:10.118765 containerd[2097]: 2025-02-13 20:10:10.106 [INFO][5191] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:10.123864 containerd[2097]: time="2025-02-13T20:10:10.123459842Z" level=info msg="TearDown network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" successfully" Feb 13 20:10:10.123864 containerd[2097]: time="2025-02-13T20:10:10.123500672Z" level=info msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" returns successfully" Feb 13 20:10:10.132741 containerd[2097]: time="2025-02-13T20:10:10.130219277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6tz9h,Uid:ca33156d-daf7-4956-9c3c-459f7e2dd2f5,Namespace:kube-system,Attempt:1,}" Feb 13 20:10:10.138175 systemd[1]: run-netns-cni\x2d55c51574\x2d8502\x2dcca3\x2d50f9\x2df29297c68f40.mount: Deactivated successfully. Feb 13 20:10:10.167650 containerd[2097]: time="2025-02-13T20:10:10.166564737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:10.168143 containerd[2097]: time="2025-02-13T20:10:10.168038945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:10.168320 containerd[2097]: time="2025-02-13T20:10:10.168292657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.169459 containerd[2097]: time="2025-02-13T20:10:10.169400120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.231183 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:10.228403 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:10.228438 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:10.263032 systemd[1]: run-containerd-runc-k8s.io-08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468-runc.hwe3bX.mount: Deactivated successfully. Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.805 [INFO][5204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.805 [INFO][5204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" iface="eth0" netns="/var/run/netns/cni-f7822e75-a9d4-47f5-8f12-7f0c320a80c6" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.806 [INFO][5204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" iface="eth0" netns="/var/run/netns/cni-f7822e75-a9d4-47f5-8f12-7f0c320a80c6" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.806 [INFO][5204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" iface="eth0" netns="/var/run/netns/cni-f7822e75-a9d4-47f5-8f12-7f0c320a80c6" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.806 [INFO][5204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:09.807 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.099 [INFO][5231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.102 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.102 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.168 [WARNING][5231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.168 [INFO][5231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.181 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:10.291562 containerd[2097]: 2025-02-13 20:10:10.212 [INFO][5204] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:10.294254 containerd[2097]: time="2025-02-13T20:10:10.293564424Z" level=info msg="TearDown network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" successfully" Feb 13 20:10:10.294254 containerd[2097]: time="2025-02-13T20:10:10.293602897Z" level=info msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" returns successfully" Feb 13 20:10:10.300810 containerd[2097]: time="2025-02-13T20:10:10.299948213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-42f6t,Uid:454aef06-4008-4ca0-a239-19f3296963f5,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:10:10.543328 systemd-networkd[1657]: calie35944538e5: Gained IPv6LL Feb 13 20:10:10.616291 containerd[2097]: time="2025-02-13T20:10:10.616251539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-587f87bbd4-mm8mf,Uid:ba17dcdf-1279-4496-b0fc-fdde00ad61dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\"" Feb 13 20:10:10.617820 containerd[2097]: time="2025-02-13T20:10:10.617777653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-45497,Uid:42c53b1a-63e3-4525-8d7f-9aecdf031a3b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4\"" Feb 13 20:10:10.800267 kernel: bpftool[5403]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:10:10.881679 systemd-networkd[1657]: cali0dc172e4dd4: Link UP Feb 13 20:10:10.884210 systemd-networkd[1657]: cali0dc172e4dd4: Gained carrier Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.639 [INFO][5334] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.677 [INFO][5334] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0 coredns-7db6d8ff4d- kube-system ca33156d-daf7-4956-9c3c-459f7e2dd2f5 874 0 2025-02-13 20:09:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-93 coredns-7db6d8ff4d-6tz9h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0dc172e4dd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.677 [INFO][5334] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.787 [INFO][5387] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" HandleID="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" 
Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.802 [INFO][5387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" HandleID="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050dc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-93", "pod":"coredns-7db6d8ff4d-6tz9h", "timestamp":"2025-02-13 20:10:10.787583936 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.802 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.802 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.802 [INFO][5387] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.806 [INFO][5387] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.813 [INFO][5387] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.825 [INFO][5387] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.831 [INFO][5387] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.836 [INFO][5387] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.836 [INFO][5387] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.839 [INFO][5387] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.848 [INFO][5387] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.867 [INFO][5387] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.68/26] block=192.168.111.64/26 handle="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" host="ip-172-31-16-93" Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.867 [INFO][5387] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.68/26] handle="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" host="ip-172-31-16-93" Feb 13 
20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.867 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:10.917036 containerd[2097]: 2025-02-13 20:10:10.867 [INFO][5387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.68/26] IPv6=[] ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" HandleID="k8s-pod-network.c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.873 [INFO][5334] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca33156d-daf7-4956-9c3c-459f7e2dd2f5", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"coredns-7db6d8ff4d-6tz9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dc172e4dd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.873 [INFO][5334] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.68/32] ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.873 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dc172e4dd4 ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.876 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.877 [INFO][5334] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca33156d-daf7-4956-9c3c-459f7e2dd2f5", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b", Pod:"coredns-7db6d8ff4d-6tz9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dc172e4dd4", MAC:"d6:2f:44:40:be:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:10.919809 containerd[2097]: 2025-02-13 20:10:10.913 [INFO][5334] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6tz9h" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:10.984223 containerd[2097]: time="2025-02-13T20:10:10.982052646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:10.984223 containerd[2097]: time="2025-02-13T20:10:10.982154033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:10.984223 containerd[2097]: time="2025-02-13T20:10:10.982192789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.984223 containerd[2097]: time="2025-02-13T20:10:10.982339547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:10.992851 systemd-networkd[1657]: calie44524c144e: Gained IPv6LL Feb 13 20:10:10.997043 systemd-networkd[1657]: cali750aa138a6e: Link UP Feb 13 20:10:10.997417 systemd-networkd[1657]: cali750aa138a6e: Gained carrier Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.662 [INFO][5348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.705 [INFO][5348] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0 calico-apiserver-7766b6c6c6- calico-apiserver 454aef06-4008-4ca0-a239-19f3296963f5 873 0 2025-02-13 20:09:38 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7766b6c6c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-93 calico-apiserver-7766b6c6c6-42f6t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali750aa138a6e [] []}} ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.751 [INFO][5348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.849 [INFO][5398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" HandleID="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.870 [INFO][5398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" HandleID="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002914e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-93", "pod":"calico-apiserver-7766b6c6c6-42f6t", "timestamp":"2025-02-13 20:10:10.849221658 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.870 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.870 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.870 [INFO][5398] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.873 [INFO][5398] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.893 [INFO][5398] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.913 [INFO][5398] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.920 [INFO][5398] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.927 [INFO][5398] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.928 [INFO][5398] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.934 [INFO][5398] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.947 [INFO][5398] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.958 [INFO][5398] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.69/26] block=192.168.111.64/26 handle="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.959 [INFO][5398] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.69/26] handle="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" host="ip-172-31-16-93" Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.959 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:10:11.050835 containerd[2097]: 2025-02-13 20:10:10.959 [INFO][5398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.69/26] IPv6=[] ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" HandleID="k8s-pod-network.4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:10.974 [INFO][5348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"454aef06-4008-4ca0-a239-19f3296963f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"calico-apiserver-7766b6c6c6-42f6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali750aa138a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:10.974 [INFO][5348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.69/32] ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:10.974 [INFO][5348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali750aa138a6e ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:11.002 [INFO][5348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:11.009 [INFO][5348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"454aef06-4008-4ca0-a239-19f3296963f5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f", Pod:"calico-apiserver-7766b6c6c6-42f6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali750aa138a6e", MAC:"16:7b:c7:01:52:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:11.054869 containerd[2097]: 2025-02-13 20:10:11.039 [INFO][5348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f" Namespace="calico-apiserver" Pod="calico-apiserver-7766b6c6c6-42f6t" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:11.119580 systemd[1]: run-netns-cni\x2df7822e75\x2da9d4\x2d47f5\x2d8f12\x2d7f0c320a80c6.mount: Deactivated successfully. Feb 13 20:10:11.208424 containerd[2097]: time="2025-02-13T20:10:11.208288343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6tz9h,Uid:ca33156d-daf7-4956-9c3c-459f7e2dd2f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b\"" Feb 13 20:10:11.240190 containerd[2097]: time="2025-02-13T20:10:11.239886020Z" level=info msg="CreateContainer within sandbox \"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:10:11.290816 containerd[2097]: time="2025-02-13T20:10:11.288712466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:11.290816 containerd[2097]: time="2025-02-13T20:10:11.288798471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:11.290816 containerd[2097]: time="2025-02-13T20:10:11.288824163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:11.290816 containerd[2097]: time="2025-02-13T20:10:11.288943136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:11.480501 containerd[2097]: time="2025-02-13T20:10:11.479577411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7766b6c6c6-42f6t,Uid:454aef06-4008-4ca0-a239-19f3296963f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f\"" Feb 13 20:10:11.520063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635471655.mount: Deactivated successfully. Feb 13 20:10:11.545669 containerd[2097]: time="2025-02-13T20:10:11.545616770Z" level=info msg="CreateContainer within sandbox \"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73fc571f117773d37c2d60b16020bee5beda911e6a0862b409f7c4842823e17c\"" Feb 13 20:10:11.547067 containerd[2097]: time="2025-02-13T20:10:11.547032700Z" level=info msg="StartContainer for \"73fc571f117773d37c2d60b16020bee5beda911e6a0862b409f7c4842823e17c\"" Feb 13 20:10:11.818303 systemd[1]: Started sshd@8-172.31.16.93:22-139.178.89.65:45258.service - OpenSSH per-connection server daemon (139.178.89.65:45258). Feb 13 20:10:11.821031 systemd-networkd[1657]: vxlan.calico: Link UP Feb 13 20:10:11.821037 systemd-networkd[1657]: vxlan.calico: Gained carrier Feb 13 20:10:11.827249 (udev-worker)[4840]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:10:12.019448 containerd[2097]: time="2025-02-13T20:10:12.013990528Z" level=info msg="StartContainer for \"73fc571f117773d37c2d60b16020bee5beda911e6a0862b409f7c4842823e17c\" returns successfully" Feb 13 20:10:12.211009 sshd[5556]: Accepted publickey for core from 139.178.89.65 port 45258 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:12.214269 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:12.228827 systemd-logind[2075]: New session 9 of user core. Feb 13 20:10:12.237498 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 20:10:12.314571 kubelet[3699]: I0213 20:10:12.313519 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6tz9h" podStartSLOduration=43.313494207 podStartE2EDuration="43.313494207s" podCreationTimestamp="2025-02-13 20:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:12.310519343 +0000 UTC m=+57.166472152" watchObservedRunningTime="2025-02-13 20:10:12.313494207 +0000 UTC m=+57.169447015" Feb 13 20:10:12.413051 systemd-networkd[1657]: cali0dc172e4dd4: Gained IPv6LL Feb 13 20:10:12.639523 containerd[2097]: time="2025-02-13T20:10:12.639479042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:12.642648 containerd[2097]: time="2025-02-13T20:10:12.642566454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:10:12.645020 containerd[2097]: time="2025-02-13T20:10:12.644951805Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:12.650216 containerd[2097]: time="2025-02-13T20:10:12.650136633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:12.652727 containerd[2097]: time="2025-02-13T20:10:12.651131862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.899811091s" Feb 13 20:10:12.652727 containerd[2097]: time="2025-02-13T20:10:12.651174084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:10:12.660329 containerd[2097]: time="2025-02-13T20:10:12.660285758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:10:12.664718 containerd[2097]: time="2025-02-13T20:10:12.663805379Z" level=info msg="CreateContainer within sandbox \"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:10:12.708398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378062713.mount: Deactivated successfully. 
Feb 13 20:10:12.716734 containerd[2097]: time="2025-02-13T20:10:12.715679686Z" level=info msg="CreateContainer within sandbox \"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"26e15a2117bf8af2869861ef5c7fe39adb2b8628fccd75962131d4f057a96c5a\"" Feb 13 20:10:12.719389 containerd[2097]: time="2025-02-13T20:10:12.719350821Z" level=info msg="StartContainer for \"26e15a2117bf8af2869861ef5c7fe39adb2b8628fccd75962131d4f057a96c5a\"" Feb 13 20:10:12.719614 systemd-networkd[1657]: cali750aa138a6e: Gained IPv6LL Feb 13 20:10:12.971721 containerd[2097]: time="2025-02-13T20:10:12.971415008Z" level=info msg="StartContainer for \"26e15a2117bf8af2869861ef5c7fe39adb2b8628fccd75962131d4f057a96c5a\" returns successfully" Feb 13 20:10:13.196142 sshd[5556]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:13.202739 systemd[1]: sshd@8-172.31.16.93:22-139.178.89.65:45258.service: Deactivated successfully. Feb 13 20:10:13.203340 systemd-logind[2075]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:10:13.211681 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:10:13.213790 systemd-logind[2075]: Removed session 9. Feb 13 20:10:13.423339 systemd-networkd[1657]: vxlan.calico: Gained IPv6LL Feb 13 20:10:14.063473 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:14.063517 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:14.065209 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:15.402805 containerd[2097]: time="2025-02-13T20:10:15.402467797Z" level=info msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" Feb 13 20:10:15.553031 ntpd[2052]: Listen normally on 6 vxlan.calico 192.168.111.64:123 Feb 13 20:10:15.553137 ntpd[2052]: Listen normally on 7 cali505b60cef5d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 6 vxlan.calico 192.168.111.64:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 7 cali505b60cef5d [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 8 calie35944538e5 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 9 calie44524c144e [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 10 cali0dc172e4dd4 [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 11 cali750aa138a6e [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 20:10:15.556976 ntpd[2052]: 13 Feb 20:10:15 ntpd[2052]: Listen normally on 12 vxlan.calico [fe80::6462:5cff:fed3:73ba%9]:123 Feb 13 20:10:15.553193 ntpd[2052]: Listen normally on 8 calie35944538e5 [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 20:10:15.553233 ntpd[2052]: Listen normally on 9 calie44524c144e [fe80::ecee:eeff:feee:eeee%6]:123 Feb 13 20:10:15.553274 ntpd[2052]: Listen normally on 10 cali0dc172e4dd4 [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 20:10:15.553314 ntpd[2052]: Listen normally on 11 cali750aa138a6e [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 20:10:15.553351 ntpd[2052]: Listen normally on 12 vxlan.calico [fe80::6462:5cff:fed3:73ba%9]:123 Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.654 [WARNING][5694] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"454aef06-4008-4ca0-a239-19f3296963f5", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f", Pod:"calico-apiserver-7766b6c6c6-42f6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali750aa138a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.654 [INFO][5694] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.654 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" iface="eth0" netns="" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.654 [INFO][5694] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.654 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.749 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.749 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.749 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.766 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.766 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.770 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:15.782545 containerd[2097]: 2025-02-13 20:10:15.774 [INFO][5694] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:15.782545 containerd[2097]: time="2025-02-13T20:10:15.782373887Z" level=info msg="TearDown network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" successfully" Feb 13 20:10:15.782545 containerd[2097]: time="2025-02-13T20:10:15.782405262Z" level=info msg="StopPodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" returns successfully" Feb 13 20:10:15.804942 containerd[2097]: time="2025-02-13T20:10:15.804883486Z" level=info msg="RemovePodSandbox for \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" Feb 13 20:10:15.804942 containerd[2097]: time="2025-02-13T20:10:15.804932802Z" level=info msg="Forcibly stopping sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\"" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:15.924 [WARNING][5718] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"454aef06-4008-4ca0-a239-19f3296963f5", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f", Pod:"calico-apiserver-7766b6c6c6-42f6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali750aa138a6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:15.928 [INFO][5718] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:15.928 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" iface="eth0" netns="" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:15.928 [INFO][5718] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:15.928 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.009 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.009 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.009 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.027 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.027 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" HandleID="k8s-pod-network.93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--42f6t-eth0" Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.033 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.047689 containerd[2097]: 2025-02-13 20:10:16.040 [INFO][5718] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3" Feb 13 20:10:16.047689 containerd[2097]: time="2025-02-13T20:10:16.047618982Z" level=info msg="TearDown network for sandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" successfully" Feb 13 20:10:16.083088 containerd[2097]: time="2025-02-13T20:10:16.082764504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:16.083088 containerd[2097]: time="2025-02-13T20:10:16.082946571Z" level=info msg="RemovePodSandbox \"93efc1025ebd330db77b04ca102e7ec8308a59565feecc61baada5f9ebb71aa3\" returns successfully" Feb 13 20:10:16.084198 containerd[2097]: time="2025-02-13T20:10:16.084160622Z" level=info msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" Feb 13 20:10:16.111278 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:16.114227 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:16.111287 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.174 [WARNING][5745] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42c53b1a-63e3-4525-8d7f-9aecdf031a3b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4", Pod:"calico-apiserver-7766b6c6c6-45497", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie44524c144e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.177 [INFO][5745] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.177 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" iface="eth0" netns="" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.177 [INFO][5745] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.177 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.228 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.228 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.228 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.238 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.239 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.241 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.246661 containerd[2097]: 2025-02-13 20:10:16.243 [INFO][5745] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.248983 containerd[2097]: time="2025-02-13T20:10:16.247247079Z" level=info msg="TearDown network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" successfully" Feb 13 20:10:16.248983 containerd[2097]: time="2025-02-13T20:10:16.247281727Z" level=info msg="StopPodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" returns successfully" Feb 13 20:10:16.249678 containerd[2097]: time="2025-02-13T20:10:16.249219759Z" level=info msg="RemovePodSandbox for \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" Feb 13 20:10:16.249678 containerd[2097]: time="2025-02-13T20:10:16.249261855Z" level=info msg="Forcibly stopping sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\"" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.347 [WARNING][5771] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0", GenerateName:"calico-apiserver-7766b6c6c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"42c53b1a-63e3-4525-8d7f-9aecdf031a3b", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7766b6c6c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4", Pod:"calico-apiserver-7766b6c6c6-45497", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie44524c144e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.348 [INFO][5771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.348 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" iface="eth0" netns="" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.348 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.348 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.414 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.414 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.414 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.424 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.424 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" HandleID="k8s-pod-network.c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Workload="ip--172--31--16--93-k8s-calico--apiserver--7766b6c6c6--45497-eth0" Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.427 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.435754 containerd[2097]: 2025-02-13 20:10:16.430 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981" Feb 13 20:10:16.438508 containerd[2097]: time="2025-02-13T20:10:16.437316772Z" level=info msg="TearDown network for sandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" successfully" Feb 13 20:10:16.452718 containerd[2097]: time="2025-02-13T20:10:16.452058873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:16.453011 containerd[2097]: time="2025-02-13T20:10:16.452873654Z" level=info msg="RemovePodSandbox \"c50a0fd76b8dcb7dab360ed62e5c96aaf7894b2822f90330b884f2d8e053a981\" returns successfully" Feb 13 20:10:16.454979 containerd[2097]: time="2025-02-13T20:10:16.454948493Z" level=info msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.533 [WARNING][5795] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca33156d-daf7-4956-9c3c-459f7e2dd2f5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b", Pod:"coredns-7db6d8ff4d-6tz9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dc172e4dd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.533 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.533 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" iface="eth0" netns="" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.533 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.533 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.578 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.578 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.578 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.588 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.588 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.591 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.595709 containerd[2097]: 2025-02-13 20:10:16.592 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.597257 containerd[2097]: time="2025-02-13T20:10:16.595750752Z" level=info msg="TearDown network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" successfully" Feb 13 20:10:16.597257 containerd[2097]: time="2025-02-13T20:10:16.595779567Z" level=info msg="StopPodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" returns successfully" Feb 13 20:10:16.597257 containerd[2097]: time="2025-02-13T20:10:16.596488975Z" level=info msg="RemovePodSandbox for \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" Feb 13 20:10:16.597257 containerd[2097]: time="2025-02-13T20:10:16.596539968Z" level=info msg="Forcibly stopping sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\"" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.690 [WARNING][5820] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ca33156d-daf7-4956-9c3c-459f7e2dd2f5", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"c95376eaf1f5afe4131ebf3e3848912f04ab887a10f437a6ec5ad08ba0a70c8b", Pod:"coredns-7db6d8ff4d-6tz9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dc172e4dd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.691 [INFO][5820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.691 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" iface="eth0" netns="" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.691 [INFO][5820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.692 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.750 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.751 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.751 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.762 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.762 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" HandleID="k8s-pod-network.f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--6tz9h-eth0" Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.766 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.774342 containerd[2097]: 2025-02-13 20:10:16.771 [INFO][5820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482" Feb 13 20:10:16.776007 containerd[2097]: time="2025-02-13T20:10:16.775004434Z" level=info msg="TearDown network for sandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" successfully" Feb 13 20:10:16.781794 containerd[2097]: time="2025-02-13T20:10:16.781750712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:16.782055 containerd[2097]: time="2025-02-13T20:10:16.781826224Z" level=info msg="RemovePodSandbox \"f9dccd54955daa6f0903bf78e5bbf99ac85ab89aa8f2db62860f39747f28c482\" returns successfully" Feb 13 20:10:16.783297 containerd[2097]: time="2025-02-13T20:10:16.783268217Z" level=info msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.878 [WARNING][5846] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0", GenerateName:"calico-kube-controllers-587f87bbd4-", Namespace:"calico-system", SelfLink:"", UID:"ba17dcdf-1279-4496-b0fc-fdde00ad61dc", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587f87bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468", Pod:"calico-kube-controllers-587f87bbd4-mm8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie35944538e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.879 [INFO][5846] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.879 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" iface="eth0" netns="" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.879 [INFO][5846] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.879 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.958 [INFO][5852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.958 [INFO][5852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.958 [INFO][5852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.969 [WARNING][5852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.969 [INFO][5852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.973 [INFO][5852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:16.981107 containerd[2097]: 2025-02-13 20:10:16.976 [INFO][5846] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:16.983186 containerd[2097]: time="2025-02-13T20:10:16.981136807Z" level=info msg="TearDown network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" successfully" Feb 13 20:10:16.983186 containerd[2097]: time="2025-02-13T20:10:16.981169767Z" level=info msg="StopPodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" returns successfully" Feb 13 20:10:16.983186 containerd[2097]: time="2025-02-13T20:10:16.982266801Z" level=info msg="RemovePodSandbox for \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" Feb 13 20:10:16.983186 containerd[2097]: time="2025-02-13T20:10:16.982297208Z" level=info msg="Forcibly stopping sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\"" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.078 [WARNING][5870] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0", GenerateName:"calico-kube-controllers-587f87bbd4-", Namespace:"calico-system", SelfLink:"", UID:"ba17dcdf-1279-4496-b0fc-fdde00ad61dc", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"587f87bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468", Pod:"calico-kube-controllers-587f87bbd4-mm8mf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie35944538e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.079 [INFO][5870] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.079 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" iface="eth0" netns="" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.079 [INFO][5870] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.079 [INFO][5870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.137 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.137 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.137 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.145 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.146 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" HandleID="k8s-pod-network.3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.151 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:17.158602 containerd[2097]: 2025-02-13 20:10:17.156 [INFO][5870] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6" Feb 13 20:10:17.160576 containerd[2097]: time="2025-02-13T20:10:17.159441221Z" level=info msg="TearDown network for sandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" successfully" Feb 13 20:10:17.168771 containerd[2097]: time="2025-02-13T20:10:17.168722912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:17.169128 containerd[2097]: time="2025-02-13T20:10:17.169090297Z" level=info msg="RemovePodSandbox \"3d1ca5284543f1dc3550ba7634c6498c40c87dc09b279448626637eebd5af2d6\" returns successfully" Feb 13 20:10:17.170377 containerd[2097]: time="2025-02-13T20:10:17.170193506Z" level=info msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.279 [WARNING][5894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1913a1ef-26a6-4963-ad3b-0e30d0c766c9", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0", Pod:"csi-node-driver-g2mq8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali505b60cef5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.279 [INFO][5894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.279 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" iface="eth0" netns="" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.280 [INFO][5894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.280 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.334 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.335 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.335 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.351 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.351 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.355 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:17.365127 containerd[2097]: 2025-02-13 20:10:17.360 [INFO][5894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.368153 containerd[2097]: time="2025-02-13T20:10:17.365131663Z" level=info msg="TearDown network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" successfully" Feb 13 20:10:17.368153 containerd[2097]: time="2025-02-13T20:10:17.365161988Z" level=info msg="StopPodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" returns successfully" Feb 13 20:10:17.368153 containerd[2097]: time="2025-02-13T20:10:17.366620601Z" level=info msg="RemovePodSandbox for \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" Feb 13 20:10:17.368153 containerd[2097]: time="2025-02-13T20:10:17.366810760Z" level=info msg="Forcibly stopping sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\"" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.483 [WARNING][5918] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1913a1ef-26a6-4963-ad3b-0e30d0c766c9", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0", Pod:"csi-node-driver-g2mq8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali505b60cef5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.485 [INFO][5918] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.485 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" iface="eth0" netns="" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.485 [INFO][5918] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.485 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.551 [INFO][5924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.551 [INFO][5924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.551 [INFO][5924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.560 [WARNING][5924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.560 [INFO][5924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" HandleID="k8s-pod-network.a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Workload="ip--172--31--16--93-k8s-csi--node--driver--g2mq8-eth0" Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.563 [INFO][5924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:17.571382 containerd[2097]: 2025-02-13 20:10:17.566 [INFO][5918] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8" Feb 13 20:10:17.571382 containerd[2097]: time="2025-02-13T20:10:17.571320129Z" level=info msg="TearDown network for sandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" successfully" Feb 13 20:10:17.599182 containerd[2097]: time="2025-02-13T20:10:17.599131278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:10:17.599333 containerd[2097]: time="2025-02-13T20:10:17.599213576Z" level=info msg="RemovePodSandbox \"a7e7c47ce2129c91594868baca4041d2f1eb491e7ccbf6d3af47a11a3cfd71b8\" returns successfully" Feb 13 20:10:17.735331 containerd[2097]: time="2025-02-13T20:10:17.735284811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.737137 containerd[2097]: time="2025-02-13T20:10:17.737064733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:10:17.739024 containerd[2097]: time="2025-02-13T20:10:17.738962792Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.744357 containerd[2097]: time="2025-02-13T20:10:17.744285967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.745400 containerd[2097]: time="2025-02-13T20:10:17.745237550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.084896102s" Feb 13 20:10:17.745400 containerd[2097]: time="2025-02-13T20:10:17.745278382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:10:17.748375 containerd[2097]: time="2025-02-13T20:10:17.748320160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:10:17.751435 
containerd[2097]: time="2025-02-13T20:10:17.751403570Z" level=info msg="CreateContainer within sandbox \"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:10:17.778805 containerd[2097]: time="2025-02-13T20:10:17.778762890Z" level=info msg="CreateContainer within sandbox \"45dd1348fb40060ace48b4ec1d8756b08fb329e83bae839941262dc81e2aa5a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af5a1c6d58f318163dd2bcc82d679930b02589415f53e90eec8da494ba9ae241\"" Feb 13 20:10:17.779518 containerd[2097]: time="2025-02-13T20:10:17.779404639Z" level=info msg="StartContainer for \"af5a1c6d58f318163dd2bcc82d679930b02589415f53e90eec8da494ba9ae241\"" Feb 13 20:10:17.921306 containerd[2097]: time="2025-02-13T20:10:17.921125590Z" level=info msg="StartContainer for \"af5a1c6d58f318163dd2bcc82d679930b02589415f53e90eec8da494ba9ae241\" returns successfully" Feb 13 20:10:18.243628 systemd[1]: Started sshd@9-172.31.16.93:22-139.178.89.65:54646.service - OpenSSH per-connection server daemon (139.178.89.65:54646). Feb 13 20:10:18.388688 kubelet[3699]: I0213 20:10:18.388603 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7766b6c6c6-45497" podStartSLOduration=33.291510488 podStartE2EDuration="40.388578594s" podCreationTimestamp="2025-02-13 20:09:38 +0000 UTC" firstStartedPulling="2025-02-13 20:10:10.650705218 +0000 UTC m=+55.506658007" lastFinishedPulling="2025-02-13 20:10:17.747773313 +0000 UTC m=+62.603726113" observedRunningTime="2025-02-13 20:10:18.379875473 +0000 UTC m=+63.235828280" watchObservedRunningTime="2025-02-13 20:10:18.388578594 +0000 UTC m=+63.244531402" Feb 13 20:10:18.502950 sshd[5972]: Accepted publickey for core from 139.178.89.65 port 54646 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:18.505203 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:18.516042 systemd-logind[2075]: New session 10 of user core. Feb 13 20:10:18.523581 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:10:19.109636 sshd[5972]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:19.114477 systemd-logind[2075]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:10:19.117114 systemd[1]: sshd@9-172.31.16.93:22-139.178.89.65:54646.service: Deactivated successfully. Feb 13 20:10:19.125787 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:10:19.130131 systemd-logind[2075]: Removed session 10. Feb 13 20:10:19.358175 containerd[2097]: time="2025-02-13T20:10:19.357874794Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:10:19.371085 kubelet[3699]: I0213 20:10:19.370193 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.494 [INFO][6010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.496 [INFO][6010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" iface="eth0" netns="/var/run/netns/cni-136d88f3-9969-84fc-f623-009aaa360985" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.496 [INFO][6010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" iface="eth0" netns="/var/run/netns/cni-136d88f3-9969-84fc-f623-009aaa360985" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.497 [INFO][6010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" iface="eth0" netns="/var/run/netns/cni-136d88f3-9969-84fc-f623-009aaa360985" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.497 [INFO][6010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.497 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.554 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.554 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.554 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.561 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.561 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.563 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:19.573757 containerd[2097]: 2025-02-13 20:10:19.566 [INFO][6010] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:10:19.587735 containerd[2097]: time="2025-02-13T20:10:19.573922256Z" level=info msg="TearDown network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" successfully" Feb 13 20:10:19.587735 containerd[2097]: time="2025-02-13T20:10:19.573955496Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" returns successfully" Feb 13 20:10:19.587735 containerd[2097]: time="2025-02-13T20:10:19.582988493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wkpf,Uid:a95f6702-f897-4b44-9e9f-23c6d7c2741b,Namespace:kube-system,Attempt:1,}" Feb 13 20:10:19.591289 systemd[1]: run-netns-cni\x2d136d88f3\x2d9969\x2d84fc\x2df623\x2d009aaa360985.mount: Deactivated successfully. Feb 13 20:10:19.807575 systemd-networkd[1657]: calid789b3c6948: Link UP Feb 13 20:10:19.808464 systemd-networkd[1657]: calid789b3c6948: Gained carrier Feb 13 20:10:19.824836 (udev-worker)[6041]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.688 [INFO][6022] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0 coredns-7db6d8ff4d- kube-system a95f6702-f897-4b44-9e9f-23c6d7c2741b 946 0 2025-02-13 20:09:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-93 coredns-7db6d8ff4d-5wkpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid789b3c6948 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.689 [INFO][6022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.730 [INFO][6034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" HandleID="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.753 [INFO][6034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" HandleID="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-93", "pod":"coredns-7db6d8ff4d-5wkpf", "timestamp":"2025-02-13 20:10:19.73056934 +0000 UTC"}, Hostname:"ip-172-31-16-93", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.753 [INFO][6034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.753 [INFO][6034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.754 [INFO][6034] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-93' Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.757 [INFO][6034] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.764 [INFO][6034] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.771 [INFO][6034] ipam/ipam.go 489: Trying affinity for 192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.774 [INFO][6034] ipam/ipam.go 155: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.779 [INFO][6034] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.779 [INFO][6034] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.783 [INFO][6034] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54 Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.788 [INFO][6034] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.798 [INFO][6034] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.111.70/26] block=192.168.111.64/26 handle="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.798 [INFO][6034] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.111.70/26] handle="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" host="ip-172-31-16-93" Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.798 [INFO][6034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:10:19.845091 containerd[2097]: 2025-02-13 20:10:19.798 [INFO][6034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.70/26] IPv6=[] ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" HandleID="k8s-pod-network.4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.802 [INFO][6022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a95f6702-f897-4b44-9e9f-23c6d7c2741b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"", Pod:"coredns-7db6d8ff4d-5wkpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid789b3c6948", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.802 [INFO][6022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.111.70/32] ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.802 [INFO][6022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid789b3c6948 ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.811 [INFO][6022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" 
WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.814 [INFO][6022] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a95f6702-f897-4b44-9e9f-23c6d7c2741b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54", Pod:"coredns-7db6d8ff4d-5wkpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid789b3c6948", MAC:"3e:4c:1e:9a:8c:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:10:19.847566 containerd[2097]: 2025-02-13 20:10:19.834 [INFO][6022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5wkpf" WorkloadEndpoint="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:10:19.951265 containerd[2097]: time="2025-02-13T20:10:19.950672030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:19.951265 containerd[2097]: time="2025-02-13T20:10:19.951164528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:19.951265 containerd[2097]: time="2025-02-13T20:10:19.951189228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:19.951736 containerd[2097]: time="2025-02-13T20:10:19.951580023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:20.099003 containerd[2097]: time="2025-02-13T20:10:20.098412653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5wkpf,Uid:a95f6702-f897-4b44-9e9f-23c6d7c2741b,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54\"" Feb 13 20:10:20.106048 containerd[2097]: time="2025-02-13T20:10:20.106015374Z" level=info msg="CreateContainer within sandbox \"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:10:20.268327 containerd[2097]: time="2025-02-13T20:10:20.268279859Z" level=info msg="CreateContainer within sandbox \"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b53521bb8e11a058321aca91e3add0910ee5a2a8f151908c02b80fa305acbad4\"" Feb 13 20:10:20.271235 containerd[2097]: time="2025-02-13T20:10:20.270938929Z" level=info msg="StartContainer for \"b53521bb8e11a058321aca91e3add0910ee5a2a8f151908c02b80fa305acbad4\"" Feb 13 20:10:20.372601 containerd[2097]: time="2025-02-13T20:10:20.369455996Z" level=info msg="StartContainer for \"b53521bb8e11a058321aca91e3add0910ee5a2a8f151908c02b80fa305acbad4\" returns successfully" Feb 13 20:10:21.494580 kubelet[3699]: I0213 20:10:21.494447 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5wkpf" podStartSLOduration=52.494399289 podStartE2EDuration="52.494399289s" podCreationTimestamp="2025-02-13 20:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:20.453725899 +0000 UTC m=+65.309678706" watchObservedRunningTime="2025-02-13 20:10:21.494399289 +0000 UTC m=+66.350352098" Feb 13 20:10:21.568309 containerd[2097]: time="2025-02-13T20:10:21.568195532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:21.570042 containerd[2097]: time="2025-02-13T20:10:21.569887101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:10:21.571651 containerd[2097]: time="2025-02-13T20:10:21.571617441Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:21.583773 containerd[2097]: time="2025-02-13T20:10:21.583647764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:21.589533 containerd[2097]: time="2025-02-13T20:10:21.589362939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.840995886s" Feb 13 20:10:21.589533 containerd[2097]: time="2025-02-13T20:10:21.589419153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image 
reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:10:21.606891 containerd[2097]: time="2025-02-13T20:10:21.603383385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:10:21.651016 containerd[2097]: time="2025-02-13T20:10:21.650973741Z" level=info msg="CreateContainer within sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:10:21.674988 containerd[2097]: time="2025-02-13T20:10:21.674903816Z" level=info msg="CreateContainer within sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\"" Feb 13 20:10:21.677819 containerd[2097]: time="2025-02-13T20:10:21.676320418Z" level=info msg="StartContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\"" Feb 13 20:10:21.793751 containerd[2097]: time="2025-02-13T20:10:21.793627657Z" level=info msg="StartContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" returns successfully" Feb 13 20:10:21.807247 systemd-networkd[1657]: calid789b3c6948: Gained IPv6LL Feb 13 20:10:22.063327 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:22.065284 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:22.063372 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:22.169547 containerd[2097]: time="2025-02-13T20:10:22.169498138Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:22.177092 containerd[2097]: time="2025-02-13T20:10:22.174867917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:10:22.178606 containerd[2097]: time="2025-02-13T20:10:22.178561751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 574.936584ms" Feb 13 20:10:22.178606 containerd[2097]: time="2025-02-13T20:10:22.178607893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:10:22.182379 containerd[2097]: time="2025-02-13T20:10:22.180004289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:10:22.186157 containerd[2097]: time="2025-02-13T20:10:22.186114563Z" level=info msg="CreateContainer within sandbox \"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:10:22.211346 containerd[2097]: time="2025-02-13T20:10:22.211006268Z" level=info msg="CreateContainer within sandbox \"4cec07d6d0a5fb13fc2e9924bb6ce02a24bf99a60611a535d69a35ed9f18102f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"640760e5b0d578cf5483833ab590a6d142eb7948636ab1fceecadc3181968502\"" Feb 13 20:10:22.214439 containerd[2097]: time="2025-02-13T20:10:22.212828675Z" level=info 
msg="StartContainer for \"640760e5b0d578cf5483833ab590a6d142eb7948636ab1fceecadc3181968502\"" Feb 13 20:10:22.324503 containerd[2097]: time="2025-02-13T20:10:22.324305846Z" level=info msg="StartContainer for \"640760e5b0d578cf5483833ab590a6d142eb7948636ab1fceecadc3181968502\" returns successfully" Feb 13 20:10:22.470896 kubelet[3699]: I0213 20:10:22.470218 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7766b6c6c6-42f6t" podStartSLOduration=33.789898724 podStartE2EDuration="44.470131194s" podCreationTimestamp="2025-02-13 20:09:38 +0000 UTC" firstStartedPulling="2025-02-13 20:10:11.499506319 +0000 UTC m=+56.355459119" lastFinishedPulling="2025-02-13 20:10:22.179738795 +0000 UTC m=+67.035691589" observedRunningTime="2025-02-13 20:10:22.467221618 +0000 UTC m=+67.323174425" watchObservedRunningTime="2025-02-13 20:10:22.470131194 +0000 UTC m=+67.326084003" Feb 13 20:10:22.505605 kubelet[3699]: I0213 20:10:22.505546 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-587f87bbd4-mm8mf" podStartSLOduration=34.561886163 podStartE2EDuration="45.505522436s" podCreationTimestamp="2025-02-13 20:09:37 +0000 UTC" firstStartedPulling="2025-02-13 20:10:10.651024845 +0000 UTC m=+55.506977633" lastFinishedPulling="2025-02-13 20:10:21.594661102 +0000 UTC m=+66.450613906" observedRunningTime="2025-02-13 20:10:22.505387553 +0000 UTC m=+67.361340361" watchObservedRunningTime="2025-02-13 20:10:22.505522436 +0000 UTC m=+67.361475244" Feb 13 20:10:23.440121 kubelet[3699]: I0213 20:10:23.439385 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:24.147602 systemd[1]: Started sshd@10-172.31.16.93:22-139.178.89.65:54652.service - OpenSSH per-connection server daemon (139.178.89.65:54652). Feb 13 20:10:24.479362 sshd[6257]: Accepted publickey for core from 139.178.89.65 port 54652 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:24.483721 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:24.497897 systemd-logind[2075]: New session 11 of user core. Feb 13 20:10:24.501699 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 20:10:24.554480 ntpd[2052]: Listen normally on 13 calid789b3c6948 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 20:10:24.564932 ntpd[2052]: 13 Feb 20:10:24 ntpd[2052]: Listen normally on 13 calid789b3c6948 [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 20:10:24.790187 containerd[2097]: time="2025-02-13T20:10:24.789817544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.792870 containerd[2097]: time="2025-02-13T20:10:24.792713850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:10:24.796277 containerd[2097]: time="2025-02-13T20:10:24.796215351Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.802098 containerd[2097]: time="2025-02-13T20:10:24.800813325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.817979 containerd[2097]: time="2025-02-13T20:10:24.813993027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.633945709s" Feb 13 20:10:24.823805 containerd[2097]: time="2025-02-13T20:10:24.819783221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:10:24.842760 containerd[2097]: time="2025-02-13T20:10:24.842382384Z" level=info msg="CreateContainer within sandbox \"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:10:24.876248 containerd[2097]: time="2025-02-13T20:10:24.872580131Z" level=info msg="CreateContainer within sandbox \"302708740f814e31b72665f701b91cf4edeb31c3871c2c108ab6b60330bd43b0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a2e623a98b1d8ff2d921aa34e80958eac784dd952658a47ac9bd27fe77f4172\"" Feb 13 20:10:24.876248 containerd[2097]: time="2025-02-13T20:10:24.874433029Z" level=info msg="StartContainer for \"7a2e623a98b1d8ff2d921aa34e80958eac784dd952658a47ac9bd27fe77f4172\"" Feb 13 20:10:25.001662 systemd[1]: run-containerd-runc-k8s.io-7a2e623a98b1d8ff2d921aa34e80958eac784dd952658a47ac9bd27fe77f4172-runc.emKyiZ.mount: Deactivated successfully. 
Feb 13 20:10:25.083351 containerd[2097]: time="2025-02-13T20:10:25.081870129Z" level=info msg="StartContainer for \"7a2e623a98b1d8ff2d921aa34e80958eac784dd952658a47ac9bd27fe77f4172\" returns successfully" Feb 13 20:10:25.475702 kubelet[3699]: I0213 20:10:25.475541 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g2mq8" podStartSLOduration=32.394363122 podStartE2EDuration="48.475516534s" podCreationTimestamp="2025-02-13 20:09:37 +0000 UTC" firstStartedPulling="2025-02-13 20:10:08.750240883 +0000 UTC m=+53.606193676" lastFinishedPulling="2025-02-13 20:10:24.83139429 +0000 UTC m=+69.687347088" observedRunningTime="2025-02-13 20:10:25.475013353 +0000 UTC m=+70.330966161" watchObservedRunningTime="2025-02-13 20:10:25.475516534 +0000 UTC m=+70.331469342" Feb 13 20:10:25.718713 sshd[6257]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:25.726520 systemd-logind[2075]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:10:25.729300 systemd[1]: sshd@10-172.31.16.93:22-139.178.89.65:54652.service: Deactivated successfully. Feb 13 20:10:25.742688 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:10:25.751646 systemd-logind[2075]: Removed session 11. Feb 13 20:10:25.755864 systemd[1]: Started sshd@11-172.31.16.93:22-139.178.89.65:36712.service - OpenSSH per-connection server daemon (139.178.89.65:36712). Feb 13 20:10:25.952326 sshd[6313]: Accepted publickey for core from 139.178.89.65 port 36712 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:25.954940 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:25.970445 systemd-logind[2075]: New session 12 of user core. Feb 13 20:10:25.978523 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:10:26.012789 kubelet[3699]: I0213 20:10:26.012729 3699 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:10:26.027323 kubelet[3699]: I0213 20:10:26.027237 3699 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:10:26.099307 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:26.097754 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:26.097855 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:26.520708 sshd[6313]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:26.529312 systemd[1]: sshd@11-172.31.16.93:22-139.178.89.65:36712.service: Deactivated successfully. Feb 13 20:10:26.533536 systemd-logind[2075]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:10:26.547628 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:10:26.588158 systemd[1]: Started sshd@12-172.31.16.93:22-139.178.89.65:36726.service - OpenSSH per-connection server daemon (139.178.89.65:36726). Feb 13 20:10:26.591779 systemd-logind[2075]: Removed session 12. Feb 13 20:10:26.795090 sshd[6329]: Accepted publickey for core from 139.178.89.65 port 36726 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:26.798334 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:26.804414 systemd-logind[2075]: New session 13 of user core. 
Feb 13 20:10:26.811825 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:10:27.157514 sshd[6329]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:27.162287 systemd[1]: sshd@12-172.31.16.93:22-139.178.89.65:36726.service: Deactivated successfully. Feb 13 20:10:27.176689 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:10:27.176702 systemd-logind[2075]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:10:27.179457 systemd-logind[2075]: Removed session 13. Feb 13 20:10:27.292758 kubelet[3699]: I0213 20:10:27.292381 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:28.144735 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:28.145357 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:28.144769 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:31.088863 kubelet[3699]: I0213 20:10:31.088817 3699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:10:32.185912 systemd[1]: Started sshd@13-172.31.16.93:22-139.178.89.65:36738.service - OpenSSH per-connection server daemon (139.178.89.65:36738). Feb 13 20:10:32.354013 sshd[6352]: Accepted publickey for core from 139.178.89.65 port 36738 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:32.358946 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:32.365363 systemd-logind[2075]: New session 14 of user core. Feb 13 20:10:32.371435 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:10:32.849518 sshd[6352]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:32.854704 systemd[1]: sshd@13-172.31.16.93:22-139.178.89.65:36738.service: Deactivated successfully. Feb 13 20:10:32.859059 systemd-logind[2075]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:10:32.859801 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:10:32.863483 systemd-logind[2075]: Removed session 14. Feb 13 20:10:33.264266 systemd[1]: run-containerd-runc-k8s.io-1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a-runc.l3hzZX.mount: Deactivated successfully. Feb 13 20:10:34.096296 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:34.095368 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:34.095393 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:37.883575 systemd[1]: Started sshd@14-172.31.16.93:22-139.178.89.65:34520.service - OpenSSH per-connection server daemon (139.178.89.65:34520). Feb 13 20:10:38.086871 sshd[6397]: Accepted publickey for core from 139.178.89.65 port 34520 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:38.089554 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:38.099259 systemd-logind[2075]: New session 15 of user core. Feb 13 20:10:38.110607 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:10:38.735693 sshd[6397]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:38.754643 systemd[1]: sshd@14-172.31.16.93:22-139.178.89.65:34520.service: Deactivated successfully. Feb 13 20:10:38.762641 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:10:38.762960 systemd-logind[2075]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:10:38.774478 systemd-logind[2075]: Removed session 15. 
Feb 13 20:10:39.206149 containerd[2097]: time="2025-02-13T20:10:39.205849858Z" level=info msg="StopContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" with timeout 300 (s)" Feb 13 20:10:39.211444 containerd[2097]: time="2025-02-13T20:10:39.211202467Z" level=info msg="Stop container \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" with signal terminated" Feb 13 20:10:39.979119 containerd[2097]: time="2025-02-13T20:10:39.978840109Z" level=info msg="StopContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" with timeout 30 (s)" Feb 13 20:10:39.992427 containerd[2097]: time="2025-02-13T20:10:39.992373008Z" level=info msg="Stop container \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" with signal terminated" Feb 13 20:10:40.185245 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:40.179406 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:40.179439 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:40.214430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f-rootfs.mount: Deactivated successfully. Feb 13 20:10:40.242810 containerd[2097]: time="2025-02-13T20:10:40.229568210Z" level=info msg="shim disconnected" id=eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f namespace=k8s.io Feb 13 20:10:40.268187 containerd[2097]: time="2025-02-13T20:10:40.268108782Z" level=warning msg="cleaning up after shim disconnected" id=eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f namespace=k8s.io Feb 13 20:10:40.268187 containerd[2097]: time="2025-02-13T20:10:40.268181993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:40.324956 containerd[2097]: time="2025-02-13T20:10:40.324550544Z" level=info msg="StopContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" returns successfully" Feb 13 20:10:40.356224 containerd[2097]: time="2025-02-13T20:10:40.356042512Z" level=info msg="StopPodSandbox for \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\"" Feb 13 20:10:40.356224 containerd[2097]: time="2025-02-13T20:10:40.356113546Z" level=info msg="Container to stop \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:10:40.374811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468-shm.mount: Deactivated successfully. 
Feb 13 20:10:40.451300 containerd[2097]: time="2025-02-13T20:10:40.448138863Z" level=info msg="StopContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" with timeout 4 (s)" Feb 13 20:10:40.452028 containerd[2097]: time="2025-02-13T20:10:40.451991986Z" level=info msg="Stop container \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" with signal terminated" Feb 13 20:10:40.499346 containerd[2097]: time="2025-02-13T20:10:40.496914432Z" level=info msg="shim disconnected" id=08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468 namespace=k8s.io Feb 13 20:10:40.499346 containerd[2097]: time="2025-02-13T20:10:40.497010311Z" level=warning msg="cleaning up after shim disconnected" id=08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468 namespace=k8s.io Feb 13 20:10:40.499346 containerd[2097]: time="2025-02-13T20:10:40.497024200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:40.508463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468-rootfs.mount: Deactivated successfully. Feb 13 20:10:40.661043 containerd[2097]: time="2025-02-13T20:10:40.660612096Z" level=info msg="shim disconnected" id=1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a namespace=k8s.io Feb 13 20:10:40.661043 containerd[2097]: time="2025-02-13T20:10:40.660848820Z" level=warning msg="cleaning up after shim disconnected" id=1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a namespace=k8s.io Feb 13 20:10:40.661043 containerd[2097]: time="2025-02-13T20:10:40.660866216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:40.668706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a-rootfs.mount: Deactivated successfully. Feb 13 20:10:40.810732 containerd[2097]: time="2025-02-13T20:10:40.806963071Z" level=info msg="StopContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" returns successfully" Feb 13 20:10:40.813895 containerd[2097]: time="2025-02-13T20:10:40.812131590Z" level=info msg="StopPodSandbox for \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\"" Feb 13 20:10:40.813895 containerd[2097]: time="2025-02-13T20:10:40.812332488Z" level=info msg="Container to stop \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:10:40.813895 containerd[2097]: time="2025-02-13T20:10:40.812362087Z" level=info msg="Container to stop \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:10:40.813895 containerd[2097]: time="2025-02-13T20:10:40.812408709Z" level=info msg="Container to stop \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:10:40.827164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0-shm.mount: Deactivated successfully. 
Feb 13 20:10:40.909098 containerd[2097]: time="2025-02-13T20:10:40.907842178Z" level=info msg="shim disconnected" id=efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0 namespace=k8s.io Feb 13 20:10:40.909098 containerd[2097]: time="2025-02-13T20:10:40.907917001Z" level=warning msg="cleaning up after shim disconnected" id=efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0 namespace=k8s.io Feb 13 20:10:40.909098 containerd[2097]: time="2025-02-13T20:10:40.907929542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:40.913912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0-rootfs.mount: Deactivated successfully. Feb 13 20:10:40.967014 containerd[2097]: time="2025-02-13T20:10:40.966968653Z" level=info msg="TearDown network for sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" successfully" Feb 13 20:10:40.967598 containerd[2097]: time="2025-02-13T20:10:40.967474317Z" level=info msg="StopPodSandbox for \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" returns successfully" Feb 13 20:10:41.066316 systemd-networkd[1657]: calie35944538e5: Link DOWN Feb 13 20:10:41.066394 systemd-networkd[1657]: calie35944538e5: Lost carrier Feb 13 20:10:41.160528 kubelet[3699]: I0213 20:10:41.160045 3699 topology_manager.go:215] "Topology Admit Handler" podUID="0df5b4d4-bbb9-4166-9f29-c1f36126a099" podNamespace="calico-system" podName="calico-node-gdzn7" Feb 13 20:10:41.238599 kubelet[3699]: I0213 20:10:41.238308 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-xtables-lock\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.238651 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-bin-dir\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.238719 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-lib-calico\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.238744 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-flexvol-driver-host\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.239082 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9z99\" (UniqueName: \"kubernetes.io/projected/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-kube-api-access-j9z99\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.239127 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-tigera-ca-bundle\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.239167 kubelet[3699]: I0213 20:10:41.239149 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-policysync\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.240455 kubelet[3699]: I0213 20:10:41.239172 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-run-calico\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.240455 kubelet[3699]: I0213 20:10:41.239207 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-net-dir\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.240455 kubelet[3699]: I0213 20:10:41.239231 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-lib-modules\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.240455 kubelet[3699]: I0213 20:10:41.239253 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-log-dir\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.240455 kubelet[3699]: I0213 20:10:41.239281 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-node-certs\") pod \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\" (UID: \"f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc\") " Feb 13 20:10:41.256784 kubelet[3699]: I0213 20:10:41.256314 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.256784 kubelet[3699]: I0213 20:10:41.256596 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.256784 kubelet[3699]: I0213 20:10:41.256623 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.284908 kubelet[3699]: I0213 20:10:41.249595 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.284908 kubelet[3699]: E0213 20:10:41.284136 3699 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" containerName="flexvol-driver" Feb 13 20:10:41.284908 kubelet[3699]: E0213 20:10:41.284172 3699 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" containerName="install-cni" Feb 13 20:10:41.284908 kubelet[3699]: E0213 20:10:41.284187 3699 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" containerName="calico-node" Feb 13 20:10:41.284908 kubelet[3699]: I0213 20:10:41.284271 3699 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" containerName="calico-node" Feb 13 20:10:41.290989 kubelet[3699]: I0213 20:10:41.290914 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-policysync" (OuterVolumeSpecName: "policysync") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.291382 kubelet[3699]: I0213 20:10:41.291359 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.291628 kubelet[3699]: I0213 20:10:41.291608 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.300397 kubelet[3699]: I0213 20:10:41.299782 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.301466 kubelet[3699]: I0213 20:10:41.301436 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:10:41.347333 kubelet[3699]: I0213 20:10:41.347109 3699 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-bin-dir\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348398 3699 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-xtables-lock\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348431 3699 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-lib-calico\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348477 3699 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-flexvol-driver-host\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348490 3699 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-policysync\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348504 3699 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-var-run-calico\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348515 3699 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-lib-modules\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348558 3699 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-net-dir\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.348872 kubelet[3699]: I0213 20:10:41.348572 3699 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-cni-log-dir\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.030 [INFO][6582] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.036 [INFO][6582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" iface="eth0" netns="/var/run/netns/cni-f1627128-aa56-e44a-8c43-fc7e23c547ca" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.043 [INFO][6582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" iface="eth0" netns="/var/run/netns/cni-f1627128-aa56-e44a-8c43-fc7e23c547ca" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.080 [INFO][6582] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" after=37.213312ms iface="eth0" netns="/var/run/netns/cni-f1627128-aa56-e44a-8c43-fc7e23c547ca" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.080 [INFO][6582] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.080 [INFO][6582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.231 [INFO][6623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.231 [INFO][6623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.231 [INFO][6623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.334 [INFO][6623] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.334 [INFO][6623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.340 [INFO][6623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:10:41.380930 containerd[2097]: 2025-02-13 20:10:41.358 [INFO][6582] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:10:41.385183 containerd[2097]: time="2025-02-13T20:10:41.384235216Z" level=info msg="TearDown network for sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" successfully" Feb 13 20:10:41.385183 containerd[2097]: time="2025-02-13T20:10:41.384277393Z" level=info msg="StopPodSandbox for \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" returns successfully" Feb 13 20:10:41.389627 systemd[1]: var-lib-kubelet-pods-f4f05e9d\x2dcd9b\x2d4fc0\x2d97e2\x2d2781d788c6fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9z99.mount: Deactivated successfully. Feb 13 20:10:41.414094 kubelet[3699]: I0213 20:10:41.412962 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-node-certs" (OuterVolumeSpecName: "node-certs") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:10:41.417933 kubelet[3699]: I0213 20:10:41.417878 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-kube-api-access-j9z99" (OuterVolumeSpecName: "kube-api-access-j9z99") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "kube-api-access-j9z99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:10:41.420514 kubelet[3699]: I0213 20:10:41.420380 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" (UID: "f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:10:41.436639 containerd[2097]: time="2025-02-13T20:10:41.436573564Z" level=info msg="shim disconnected" id=8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085 namespace=k8s.io Feb 13 20:10:41.436639 containerd[2097]: time="2025-02-13T20:10:41.436638514Z" level=warning msg="cleaning up after shim disconnected" id=8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085 namespace=k8s.io Feb 13 20:10:41.436848 containerd[2097]: time="2025-02-13T20:10:41.436649635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:41.464317 kubelet[3699]: I0213 20:10:41.453617 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlb24\" (UniqueName: \"kubernetes.io/projected/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-kube-api-access-hlb24\") pod \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\" (UID: \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\") " Feb 13 20:10:41.464317 kubelet[3699]: I0213 20:10:41.453700 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-tigera-ca-bundle\") pod \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\" (UID: \"ba17dcdf-1279-4496-b0fc-fdde00ad61dc\") " Feb 13 20:10:41.487039 kubelet[3699]: I0213 20:10:41.486878 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5b4d4-bbb9-4166-9f29-c1f36126a099-tigera-ca-bundle\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.487039 kubelet[3699]: I0213 20:10:41.486952 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-cni-log-dir\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.487039 kubelet[3699]: I0213 20:10:41.487038 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-xtables-lock\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.492255 kubelet[3699]: I0213 20:10:41.487064 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-var-lib-calico\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.492255 kubelet[3699]: I0213 20:10:41.489351 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmwjs\" (UniqueName: \"kubernetes.io/projected/0df5b4d4-bbb9-4166-9f29-c1f36126a099-kube-api-access-cmwjs\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.492255 kubelet[3699]: I0213 20:10:41.489395 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-lib-modules\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.492255 kubelet[3699]: I0213 20:10:41.490691 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-policysync\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.492255 kubelet[3699]: I0213 20:10:41.491115 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-var-run-calico\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491178 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-cni-bin-dir\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491251 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0df5b4d4-bbb9-4166-9f29-c1f36126a099-node-certs\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491286 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-cni-net-dir\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491426 3699 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0df5b4d4-bbb9-4166-9f29-c1f36126a099-flexvol-driver-host\") pod \"calico-node-gdzn7\" (UID: \"0df5b4d4-bbb9-4166-9f29-c1f36126a099\") " pod="calico-system/calico-node-gdzn7" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491488 3699 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j9z99\" (UniqueName: 
\"kubernetes.io/projected/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-kube-api-access-j9z99\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.494155 kubelet[3699]: I0213 20:10:41.491505 3699 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-tigera-ca-bundle\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.494472 kubelet[3699]: I0213 20:10:41.491518 3699 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc-node-certs\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.503949 systemd[1]: run-netns-cni\x2df1627128\x2daa56\x2de44a\x2d8c43\x2dfc7e23c547ca.mount: Deactivated successfully. Feb 13 20:10:41.505232 systemd[1]: var-lib-kubelet-pods-f4f05e9d\x2dcd9b\x2d4fc0\x2d97e2\x2d2781d788c6fc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 20:10:41.505505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085-rootfs.mount: Deactivated successfully. Feb 13 20:10:41.505869 systemd[1]: var-lib-kubelet-pods-f4f05e9d\x2dcd9b\x2d4fc0\x2d97e2\x2d2781d788c6fc-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 13 20:10:41.530946 systemd[1]: var-lib-kubelet-pods-ba17dcdf\x2d1279\x2d4496\x2db0fc\x2dfdde00ad61dc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Feb 13 20:10:41.547511 systemd[1]: var-lib-kubelet-pods-ba17dcdf\x2d1279\x2d4496\x2db0fc\x2dfdde00ad61dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlb24.mount: Deactivated successfully. Feb 13 20:10:41.556743 containerd[2097]: time="2025-02-13T20:10:41.556672228Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:10:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:10:41.560605 kubelet[3699]: I0213 20:10:41.560486 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-kube-api-access-hlb24" (OuterVolumeSpecName: "kube-api-access-hlb24") pod "ba17dcdf-1279-4496-b0fc-fdde00ad61dc" (UID: "ba17dcdf-1279-4496-b0fc-fdde00ad61dc"). InnerVolumeSpecName "kube-api-access-hlb24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:10:41.567444 kubelet[3699]: I0213 20:10:41.564782 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ba17dcdf-1279-4496-b0fc-fdde00ad61dc" (UID: "ba17dcdf-1279-4496-b0fc-fdde00ad61dc"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:10:41.598697 kubelet[3699]: I0213 20:10:41.598592 3699 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hlb24\" (UniqueName: \"kubernetes.io/projected/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-kube-api-access-hlb24\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.598961 kubelet[3699]: I0213 20:10:41.598945 3699 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba17dcdf-1279-4496-b0fc-fdde00ad61dc-tigera-ca-bundle\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:41.645001 containerd[2097]: time="2025-02-13T20:10:41.643619060Z" level=info msg="StopContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" returns successfully" Feb 13 20:10:41.645622 kubelet[3699]: I0213 20:10:41.645480 3699 scope.go:117] "RemoveContainer" containerID="1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a" Feb 13 20:10:41.675856 containerd[2097]: time="2025-02-13T20:10:41.675819884Z" level=info msg="StopPodSandbox for \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\"" Feb 13 20:10:41.676806 containerd[2097]: time="2025-02-13T20:10:41.676692585Z" level=info msg="Container to stop \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:10:41.681280 containerd[2097]: time="2025-02-13T20:10:41.680203950Z" level=info msg="RemoveContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\"" Feb 13 20:10:41.694943 containerd[2097]: time="2025-02-13T20:10:41.694901250Z" level=info msg="RemoveContainer for \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" returns successfully" Feb 13 20:10:41.722362 kubelet[3699]: I0213 20:10:41.722334 3699 scope.go:117] "RemoveContainer" containerID="1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209" Feb 13 20:10:41.736502 containerd[2097]: time="2025-02-13T20:10:41.736465484Z" level=info msg="RemoveContainer for \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\"" Feb 13 20:10:41.742854 containerd[2097]: time="2025-02-13T20:10:41.742575952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gdzn7,Uid:0df5b4d4-bbb9-4166-9f29-c1f36126a099,Namespace:calico-system,Attempt:0,}" Feb 13 20:10:41.755873 containerd[2097]: time="2025-02-13T20:10:41.755809523Z" level=info msg="RemoveContainer for \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\" returns successfully" Feb 13 20:10:41.757197 kubelet[3699]: I0213 20:10:41.756924 3699 scope.go:117] "RemoveContainer" containerID="03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256" Feb 13 20:10:41.762745 containerd[2097]: time="2025-02-13T20:10:41.762711240Z" level=info msg="RemoveContainer for \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\"" Feb 13 20:10:41.784232 containerd[2097]: time="2025-02-13T20:10:41.784193083Z" level=info msg="RemoveContainer for \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\" returns successfully" Feb 13 20:10:41.786211 kubelet[3699]: I0213 20:10:41.785913 3699 scope.go:117] "RemoveContainer" containerID="1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a" Feb 13 20:10:41.841659 containerd[2097]: time="2025-02-13T20:10:41.786436238Z" level=error msg="ContainerStatus for 
\"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\": not found" Feb 13 20:10:41.841986 containerd[2097]: time="2025-02-13T20:10:41.833422993Z" level=info msg="shim disconnected" id=9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6 namespace=k8s.io Feb 13 20:10:41.842199 containerd[2097]: time="2025-02-13T20:10:41.842175283Z" level=warning msg="cleaning up after shim disconnected" id=9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6 namespace=k8s.io Feb 13 20:10:41.842488 containerd[2097]: time="2025-02-13T20:10:41.842237477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:41.921283 kubelet[3699]: E0213 20:10:41.921155 3699 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\": not found" containerID="1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a" Feb 13 20:10:41.922261 kubelet[3699]: I0213 20:10:41.921624 3699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a"} err="failed to get container status \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c5f1f3b1844676e41aa005ebeb53ddb3d2550d6a2ab6df1911bdd6a8352903a\": not found" Feb 13 20:10:41.922261 kubelet[3699]: I0213 20:10:41.921680 3699 scope.go:117] "RemoveContainer" containerID="1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209" Feb 13 20:10:41.924480 containerd[2097]: time="2025-02-13T20:10:41.923738673Z" level=error msg="ContainerStatus for \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\": not found" Feb 13 20:10:41.929674 kubelet[3699]: E0213 20:10:41.929185 3699 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\": not found" containerID="1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209" Feb 13 20:10:41.929674 kubelet[3699]: I0213 20:10:41.929236 3699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209"} err="failed to get container status \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e4d34d0fb69be4282f151c41b755fd2d818438404354fa7888b2d1101bdf209\": not found" Feb 13 20:10:41.929674 kubelet[3699]: I0213 20:10:41.929309 3699 scope.go:117] "RemoveContainer" containerID="03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256" Feb 13 20:10:41.934453 kubelet[3699]: E0213 20:10:41.934143 3699 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\": 
not found" containerID="03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256" Feb 13 20:10:41.934512 containerd[2097]: time="2025-02-13T20:10:41.932538108Z" level=error msg="ContainerStatus for \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\": not found" Feb 13 20:10:41.937148 kubelet[3699]: I0213 20:10:41.934181 3699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256"} err="failed to get container status \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\": rpc error: code = NotFound desc = an error occurred when try to find container \"03de15521f8f730e1f2cf7974f3d581159750c4d6a9ab1182d051daf15a78256\": not found" Feb 13 20:10:41.937148 kubelet[3699]: I0213 20:10:41.935314 3699 scope.go:117] "RemoveContainer" containerID="eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f" Feb 13 20:10:41.941320 containerd[2097]: time="2025-02-13T20:10:41.941150583Z" level=info msg="RemoveContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\"" Feb 13 20:10:41.946297 containerd[2097]: time="2025-02-13T20:10:41.945788382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:41.946297 containerd[2097]: time="2025-02-13T20:10:41.945858178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:41.946297 containerd[2097]: time="2025-02-13T20:10:41.945890956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:41.951251 containerd[2097]: time="2025-02-13T20:10:41.951105705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:41.964962 containerd[2097]: time="2025-02-13T20:10:41.964762374Z" level=info msg="RemoveContainer for \"eea71472521956c533e029500e97e5c491bc8f0220ec2dbd6d3f76afd53c253f\" returns successfully" Feb 13 20:10:41.974636 containerd[2097]: time="2025-02-13T20:10:41.974495993Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:10:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:10:42.013038 containerd[2097]: time="2025-02-13T20:10:42.012908069Z" level=info msg="TearDown network for sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" successfully" Feb 13 20:10:42.013463 containerd[2097]: time="2025-02-13T20:10:42.013128848Z" level=info msg="StopPodSandbox for \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" returns successfully" Feb 13 20:10:42.033498 containerd[2097]: time="2025-02-13T20:10:42.031788724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gdzn7,Uid:0df5b4d4-bbb9-4166-9f29-c1f36126a099,Namespace:calico-system,Attempt:0,} returns sandbox id \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\"" Feb 13 20:10:42.045650 containerd[2097]: time="2025-02-13T20:10:42.043959571Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:10:42.074931 containerd[2097]: time="2025-02-13T20:10:42.074804783Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a365a447549d73c56e4c9ab7c0da11128986f37ebe72d981e682c2ed7c8e06b\"" Feb 13 20:10:42.076381 containerd[2097]: time="2025-02-13T20:10:42.076344936Z" level=info msg="StartContainer for \"5a365a447549d73c56e4c9ab7c0da11128986f37ebe72d981e682c2ed7c8e06b\"" Feb 13 20:10:42.110632 kubelet[3699]: I0213 20:10:42.109990 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd52eebf-117d-42e1-a2fb-4681a3e748a4-typha-certs\") pod \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " Feb 13 20:10:42.110632 kubelet[3699]: I0213 20:10:42.110036 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29gh\" (UniqueName: \"kubernetes.io/projected/bd52eebf-117d-42e1-a2fb-4681a3e748a4-kube-api-access-m29gh\") pod \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " Feb 13 20:10:42.110632 kubelet[3699]: I0213 20:10:42.110113 3699 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd52eebf-117d-42e1-a2fb-4681a3e748a4-tigera-ca-bundle\") pod \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\" (UID: \"bd52eebf-117d-42e1-a2fb-4681a3e748a4\") " Feb 13 20:10:42.121596 kubelet[3699]: I0213 20:10:42.121543 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd52eebf-117d-42e1-a2fb-4681a3e748a4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "bd52eebf-117d-42e1-a2fb-4681a3e748a4" (UID: "bd52eebf-117d-42e1-a2fb-4681a3e748a4"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:10:42.121981 kubelet[3699]: I0213 20:10:42.121632 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd52eebf-117d-42e1-a2fb-4681a3e748a4-kube-api-access-m29gh" (OuterVolumeSpecName: "kube-api-access-m29gh") pod "bd52eebf-117d-42e1-a2fb-4681a3e748a4" (UID: "bd52eebf-117d-42e1-a2fb-4681a3e748a4"). InnerVolumeSpecName "kube-api-access-m29gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:10:42.122495 kubelet[3699]: I0213 20:10:42.122455 3699 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd52eebf-117d-42e1-a2fb-4681a3e748a4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bd52eebf-117d-42e1-a2fb-4681a3e748a4" (UID: "bd52eebf-117d-42e1-a2fb-4681a3e748a4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:10:42.164702 containerd[2097]: time="2025-02-13T20:10:42.164657944Z" level=info msg="StartContainer for \"5a365a447549d73c56e4c9ab7c0da11128986f37ebe72d981e682c2ed7c8e06b\" returns successfully" Feb 13 20:10:42.213175 kubelet[3699]: I0213 20:10:42.212053 3699 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd52eebf-117d-42e1-a2fb-4681a3e748a4-tigera-ca-bundle\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:42.213175 kubelet[3699]: I0213 20:10:42.212100 3699 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd52eebf-117d-42e1-a2fb-4681a3e748a4-typha-certs\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:42.213175 kubelet[3699]: I0213 20:10:42.212125 3699 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-m29gh\" (UniqueName: \"kubernetes.io/projected/bd52eebf-117d-42e1-a2fb-4681a3e748a4-kube-api-access-m29gh\") on node \"ip-172-31-16-93\" DevicePath \"\"" Feb 13 20:10:42.433483 containerd[2097]: time="2025-02-13T20:10:42.433334595Z" level=info msg="shim disconnected" id=5a365a447549d73c56e4c9ab7c0da11128986f37ebe72d981e682c2ed7c8e06b namespace=k8s.io Feb 13 20:10:42.433483 containerd[2097]: time="2025-02-13T20:10:42.433478276Z" level=warning msg="cleaning up after shim disconnected" id=5a365a447549d73c56e4c9ab7c0da11128986f37ebe72d981e682c2ed7c8e06b namespace=k8s.io Feb 13 20:10:42.433483 containerd[2097]: time="2025-02-13T20:10:42.433490676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:42.449890 containerd[2097]: time="2025-02-13T20:10:42.449778251Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:10:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:10:42.512215 systemd[1]: var-lib-kubelet-pods-bd52eebf\x2d117d\x2d42e1\x2da2fb\x2d4681a3e748a4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 13 20:10:42.513274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6-rootfs.mount: Deactivated successfully. Feb 13 20:10:42.513420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6-shm.mount: Deactivated successfully. 
Feb 13 20:10:42.513552 systemd[1]: var-lib-kubelet-pods-bd52eebf\x2d117d\x2d42e1\x2da2fb\x2d4681a3e748a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm29gh.mount: Deactivated successfully. Feb 13 20:10:42.513729 systemd[1]: var-lib-kubelet-pods-bd52eebf\x2d117d\x2d42e1\x2da2fb\x2d4681a3e748a4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Feb 13 20:10:42.699965 kubelet[3699]: I0213 20:10:42.699888 3699 scope.go:117] "RemoveContainer" containerID="8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085" Feb 13 20:10:42.706449 containerd[2097]: time="2025-02-13T20:10:42.706405711Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:10:42.724287 containerd[2097]: time="2025-02-13T20:10:42.723734272Z" level=info msg="RemoveContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\"" Feb 13 20:10:42.739848 containerd[2097]: time="2025-02-13T20:10:42.739163917Z" level=info msg="RemoveContainer for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" returns successfully" Feb 13 20:10:42.742046 kubelet[3699]: I0213 20:10:42.742023 3699 scope.go:117] "RemoveContainer" containerID="8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085" Feb 13 20:10:42.742771 containerd[2097]: time="2025-02-13T20:10:42.742682631Z" level=error msg="ContainerStatus for \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\": not found" Feb 13 20:10:42.743688 kubelet[3699]: E0213 20:10:42.743440 3699 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\": not found" containerID="8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085" Feb 13 20:10:42.744251 kubelet[3699]: I0213 20:10:42.743936 3699 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085"} err="failed to get container status \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dda719a9af59965cf5532e837373d9eefdece1ebad0add03b4bb253975fc085\": not found" Feb 13 20:10:42.767876 containerd[2097]: time="2025-02-13T20:10:42.763406443Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539\"" Feb 13 20:10:42.773999 containerd[2097]: time="2025-02-13T20:10:42.773802206Z" level=info msg="StartContainer for \"5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539\"" Feb 13 20:10:42.936001 containerd[2097]: time="2025-02-13T20:10:42.935929999Z" level=info msg="StartContainer for \"5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539\" returns successfully" Feb 13 20:10:43.376608 kubelet[3699]: I0213 20:10:43.376356 3699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba17dcdf-1279-4496-b0fc-fdde00ad61dc" 
path="/var/lib/kubelet/pods/ba17dcdf-1279-4496-b0fc-fdde00ad61dc/volumes" Feb 13 20:10:43.377776 kubelet[3699]: I0213 20:10:43.377743 3699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd52eebf-117d-42e1-a2fb-4681a3e748a4" path="/var/lib/kubelet/pods/bd52eebf-117d-42e1-a2fb-4681a3e748a4/volumes" Feb 13 20:10:43.379736 kubelet[3699]: I0213 20:10:43.379703 3699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc" path="/var/lib/kubelet/pods/f4f05e9d-cd9b-4fc0-97e2-2781d788c6fc/volumes" Feb 13 20:10:43.553235 ntpd[2052]: Deleting interface #8 calie35944538e5, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=28 secs Feb 13 20:10:43.554362 ntpd[2052]: 13 Feb 20:10:43 ntpd[2052]: Deleting interface #8 calie35944538e5, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=28 secs Feb 13 20:10:43.768586 systemd[1]: Started sshd@15-172.31.16.93:22-139.178.89.65:34532.service - OpenSSH per-connection server daemon (139.178.89.65:34532). Feb 13 20:10:43.989260 sshd[6843]: Accepted publickey for core from 139.178.89.65 port 34532 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:43.992657 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:44.005413 systemd-logind[2075]: New session 16 of user core. Feb 13 20:10:44.010797 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:10:44.880235 sshd[6843]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:44.887907 systemd[1]: sshd@15-172.31.16.93:22-139.178.89.65:34532.service: Deactivated successfully. Feb 13 20:10:44.892594 systemd-logind[2075]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:10:44.893649 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:10:44.900634 systemd-logind[2075]: Removed session 16. Feb 13 20:10:45.334493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539-rootfs.mount: Deactivated successfully. Feb 13 20:10:45.343623 containerd[2097]: time="2025-02-13T20:10:45.343557466Z" level=info msg="shim disconnected" id=5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539 namespace=k8s.io Feb 13 20:10:45.343623 containerd[2097]: time="2025-02-13T20:10:45.343622393Z" level=warning msg="cleaning up after shim disconnected" id=5e481b81123711fa38a67c26573cf90c2980d59ec2a55ec5fc2c7d193b3da539 namespace=k8s.io Feb 13 20:10:45.343623 containerd[2097]: time="2025-02-13T20:10:45.343636508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:10:46.002098 containerd[2097]: time="2025-02-13T20:10:45.999534590Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:10:46.080342 containerd[2097]: time="2025-02-13T20:10:46.079677956Z" level=info msg="CreateContainer within sandbox \"21bcccd5f15d948d6651e1ece3b8a2ccb269bdb2ee6956b43bd5f1d02848f924\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fed71a6904f140f0c2903a8c9fd111639903092e008293e3160c973f0208c152\"" Feb 13 20:10:46.080012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744055489.mount: Deactivated successfully. 
Feb 13 20:10:46.106022 containerd[2097]: time="2025-02-13T20:10:46.098351003Z" level=info msg="StartContainer for \"fed71a6904f140f0c2903a8c9fd111639903092e008293e3160c973f0208c152\"" Feb 13 20:10:46.127673 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:46.129492 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:46.127685 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:46.300761 containerd[2097]: time="2025-02-13T20:10:46.299585748Z" level=info msg="StartContainer for \"fed71a6904f140f0c2903a8c9fd111639903092e008293e3160c973f0208c152\" returns successfully" Feb 13 20:10:46.925582 kubelet[3699]: I0213 20:10:46.900145 3699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gdzn7" podStartSLOduration=5.872689623 podStartE2EDuration="5.872689623s" podCreationTimestamp="2025-02-13 20:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:46.872636168 +0000 UTC m=+91.728588977" watchObservedRunningTime="2025-02-13 20:10:46.872689623 +0000 UTC m=+91.728642431" Feb 13 20:10:46.941135 systemd[1]: run-containerd-runc-k8s.io-fed71a6904f140f0c2903a8c9fd111639903092e008293e3160c973f0208c152-runc.K0LqIF.mount: Deactivated successfully. Feb 13 20:10:47.864405 systemd[1]: run-containerd-runc-k8s.io-fed71a6904f140f0c2903a8c9fd111639903092e008293e3160c973f0208c152-runc.PdqHPx.mount: Deactivated successfully. Feb 13 20:10:48.177525 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:48.175323 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:48.175353 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:49.414895 (udev-worker)[7138]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:10:49.418325 (udev-worker)[7140]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:10:49.904932 systemd[1]: Started sshd@16-172.31.16.93:22-139.178.89.65:50332.service - OpenSSH per-connection server daemon (139.178.89.65:50332). Feb 13 20:10:50.116695 sshd[7175]: Accepted publickey for core from 139.178.89.65 port 50332 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:50.119763 sshd[7175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:50.136472 systemd-logind[2075]: New session 17 of user core. Feb 13 20:10:50.151588 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:10:50.223282 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:50.226105 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:10:50.223291 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:51.587193 sshd[7175]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:51.593699 systemd-logind[2075]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:10:51.599832 systemd[1]: sshd@16-172.31.16.93:22-139.178.89.65:50332.service: Deactivated successfully. Feb 13 20:10:51.618879 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:10:51.623746 systemd-logind[2075]: Removed session 17. Feb 13 20:10:52.271476 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:10:52.271506 systemd-resolved[1977]: Flushed all caches. Feb 13 20:10:52.274167 systemd-journald[1566]: Under memory pressure, flushing caches. 
Feb 13 20:10:56.518477 systemd[1]: Started sshd@17-172.31.16.93:22-139.178.89.65:57110.service - OpenSSH per-connection server daemon (139.178.89.65:57110). Feb 13 20:10:56.699170 sshd[7198]: Accepted publickey for core from 139.178.89.65 port 57110 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:56.702541 sshd[7198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:56.715468 systemd-logind[2075]: New session 18 of user core. Feb 13 20:10:56.722194 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:10:56.948728 sshd[7198]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:56.959422 systemd-logind[2075]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:10:56.961541 systemd[1]: sshd@17-172.31.16.93:22-139.178.89.65:57110.service: Deactivated successfully. Feb 13 20:10:56.966922 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:10:56.972042 systemd-logind[2075]: Removed session 18. Feb 13 20:10:56.978544 systemd[1]: Started sshd@18-172.31.16.93:22-139.178.89.65:57126.service - OpenSSH per-connection server daemon (139.178.89.65:57126). Feb 13 20:10:57.185388 sshd[7212]: Accepted publickey for core from 139.178.89.65 port 57126 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:57.212255 sshd[7212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:57.218750 systemd-logind[2075]: New session 19 of user core. Feb 13 20:10:57.225473 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:10:57.929068 sshd[7212]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:57.938166 systemd[1]: sshd@18-172.31.16.93:22-139.178.89.65:57126.service: Deactivated successfully. Feb 13 20:10:57.942146 systemd-logind[2075]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:10:57.942754 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:10:57.951740 systemd-logind[2075]: Removed session 19. Feb 13 20:10:57.963477 systemd[1]: Started sshd@19-172.31.16.93:22-139.178.89.65:57140.service - OpenSSH per-connection server daemon (139.178.89.65:57140). Feb 13 20:10:58.129244 sshd[7224]: Accepted publickey for core from 139.178.89.65 port 57140 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:10:58.130968 sshd[7224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:58.141920 systemd-logind[2075]: New session 20 of user core. Feb 13 20:10:58.152669 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:11:00.144841 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:11:00.144871 systemd-resolved[1977]: Flushed all caches. Feb 13 20:11:00.147114 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:11:01.089947 sshd[7224]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:01.103649 systemd[1]: sshd@19-172.31.16.93:22-139.178.89.65:57140.service: Deactivated successfully. Feb 13 20:11:01.130833 systemd-logind[2075]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:11:01.155309 systemd[1]: Started sshd@20-172.31.16.93:22-139.178.89.65:57148.service - OpenSSH per-connection server daemon (139.178.89.65:57148). Feb 13 20:11:01.161956 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:11:01.185647 systemd-logind[2075]: Removed session 20. 
Feb 13 20:11:01.420563 sshd[7258]: Accepted publickey for core from 139.178.89.65 port 57148 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:01.432558 sshd[7258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:01.450602 systemd-logind[2075]: New session 21 of user core. Feb 13 20:11:01.463576 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:11:02.195469 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:11:02.194198 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:11:02.194206 systemd-resolved[1977]: Flushed all caches. Feb 13 20:11:02.866094 sshd[7258]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:02.874026 systemd[1]: sshd@20-172.31.16.93:22-139.178.89.65:57148.service: Deactivated successfully. Feb 13 20:11:02.881946 systemd-logind[2075]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:11:02.883811 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:11:02.885652 systemd-logind[2075]: Removed session 21. Feb 13 20:11:02.894899 systemd[1]: Started sshd@21-172.31.16.93:22-139.178.89.65:57162.service - OpenSSH per-connection server daemon (139.178.89.65:57162). Feb 13 20:11:03.070994 sshd[7270]: Accepted publickey for core from 139.178.89.65 port 57162 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:03.072833 sshd[7270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:03.084086 systemd-logind[2075]: New session 22 of user core. Feb 13 20:11:03.091912 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:11:03.297799 sshd[7270]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:03.303679 systemd[1]: sshd@21-172.31.16.93:22-139.178.89.65:57162.service: Deactivated successfully. Feb 13 20:11:03.308546 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:11:03.310060 systemd-logind[2075]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:11:03.311543 systemd-logind[2075]: Removed session 22. Feb 13 20:11:08.327464 systemd[1]: Started sshd@22-172.31.16.93:22-139.178.89.65:59308.service - OpenSSH per-connection server daemon (139.178.89.65:59308). Feb 13 20:11:08.541395 sshd[7284]: Accepted publickey for core from 139.178.89.65 port 59308 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:08.554212 sshd[7284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:08.577624 systemd-logind[2075]: New session 23 of user core. Feb 13 20:11:08.581988 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:11:08.804475 sshd[7284]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:08.810679 systemd[1]: sshd@22-172.31.16.93:22-139.178.89.65:59308.service: Deactivated successfully. Feb 13 20:11:08.816597 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:11:08.818397 systemd-logind[2075]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:11:08.823496 systemd-logind[2075]: Removed session 23. Feb 13 20:11:13.835561 systemd[1]: Started sshd@23-172.31.16.93:22-139.178.89.65:59320.service - OpenSSH per-connection server daemon (139.178.89.65:59320). 
Feb 13 20:11:14.015607 sshd[7332]: Accepted publickey for core from 139.178.89.65 port 59320 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:14.018755 sshd[7332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:14.025375 systemd-logind[2075]: New session 24 of user core. Feb 13 20:11:14.034342 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:11:14.274020 sshd[7332]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:14.278504 systemd[1]: sshd@23-172.31.16.93:22-139.178.89.65:59320.service: Deactivated successfully. Feb 13 20:11:14.289513 systemd-logind[2075]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:11:14.290390 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:11:14.298105 systemd-logind[2075]: Removed session 24. Feb 13 20:11:17.624518 containerd[2097]: time="2025-02-13T20:11:17.624278801Z" level=info msg="StopPodSandbox for \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\"" Feb 13 20:11:17.624518 containerd[2097]: time="2025-02-13T20:11:17.624407124Z" level=info msg="TearDown network for sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" successfully" Feb 13 20:11:17.624518 containerd[2097]: time="2025-02-13T20:11:17.624425511Z" level=info msg="StopPodSandbox for \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" returns successfully" Feb 13 20:11:17.639957 containerd[2097]: time="2025-02-13T20:11:17.639768785Z" level=info msg="RemovePodSandbox for \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\"" Feb 13 20:11:17.650713 containerd[2097]: time="2025-02-13T20:11:17.650318550Z" level=info msg="Forcibly stopping sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\"" Feb 13 20:11:17.650713 containerd[2097]: time="2025-02-13T20:11:17.650476805Z" level=info msg="TearDown network for sandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" successfully" Feb 13 20:11:17.684990 containerd[2097]: time="2025-02-13T20:11:17.684931366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:17.685165 containerd[2097]: time="2025-02-13T20:11:17.685028536Z" level=info msg="RemovePodSandbox \"9c5b76d0cde9bb37ceb8c8f94ac3cd38d15f4b18c25456dab6cfe056157069a6\" returns successfully" Feb 13 20:11:17.685777 containerd[2097]: time="2025-02-13T20:11:17.685634343Z" level=info msg="StopPodSandbox for \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\"" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:17.834 [WARNING][7360] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:17.837 [INFO][7360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:17.837 [INFO][7360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" iface="eth0" netns="" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:17.837 [INFO][7360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:17.837 [INFO][7360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.308 [INFO][7366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.312 [INFO][7366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.313 [INFO][7366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.337 [WARNING][7366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.337 [INFO][7366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.339 [INFO][7366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:18.345095 containerd[2097]: 2025-02-13 20:11:18.342 [INFO][7360] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.345095 containerd[2097]: time="2025-02-13T20:11:18.344946762Z" level=info msg="TearDown network for sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" successfully" Feb 13 20:11:18.345095 containerd[2097]: time="2025-02-13T20:11:18.344972640Z" level=info msg="StopPodSandbox for \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" returns successfully" Feb 13 20:11:18.346504 containerd[2097]: time="2025-02-13T20:11:18.346047134Z" level=info msg="RemovePodSandbox for \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\"" Feb 13 20:11:18.346504 containerd[2097]: time="2025-02-13T20:11:18.346150545Z" level=info msg="Forcibly stopping sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\"" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.425 [WARNING][7384] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" WorkloadEndpoint="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.425 [INFO][7384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.425 [INFO][7384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" iface="eth0" netns="" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.425 [INFO][7384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.425 [INFO][7384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.457 [INFO][7390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.457 [INFO][7390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.457 [INFO][7390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.468 [WARNING][7390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.468 [INFO][7390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" HandleID="k8s-pod-network.08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Workload="ip--172--31--16--93-k8s-calico--kube--controllers--587f87bbd4--mm8mf-eth0" Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.472 [INFO][7390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:18.477100 containerd[2097]: 2025-02-13 20:11:18.475 [INFO][7384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468" Feb 13 20:11:18.479566 containerd[2097]: time="2025-02-13T20:11:18.477168651Z" level=info msg="TearDown network for sandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" successfully" Feb 13 20:11:18.484504 containerd[2097]: time="2025-02-13T20:11:18.484457616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:18.485497 containerd[2097]: time="2025-02-13T20:11:18.484536933Z" level=info msg="RemovePodSandbox \"08fe7c3fceb4e68bf1525388f5ee0c7c65b68b682b0d9d568cd1f3214b738468\" returns successfully" Feb 13 20:11:18.485497 containerd[2097]: time="2025-02-13T20:11:18.485422289Z" level=info msg="StopPodSandbox for \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\"" Feb 13 20:11:18.485712 containerd[2097]: time="2025-02-13T20:11:18.485596311Z" level=info msg="TearDown network for sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" successfully" Feb 13 20:11:18.485712 containerd[2097]: time="2025-02-13T20:11:18.485615698Z" level=info msg="StopPodSandbox for \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" returns successfully" Feb 13 20:11:18.486258 containerd[2097]: time="2025-02-13T20:11:18.486230901Z" level=info msg="RemovePodSandbox for \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\"" Feb 13 20:11:18.487318 containerd[2097]: time="2025-02-13T20:11:18.486261278Z" level=info msg="Forcibly stopping sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\"" Feb 13 20:11:18.487318 containerd[2097]: time="2025-02-13T20:11:18.486324942Z" level=info msg="TearDown network for sandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" successfully" Feb 13 20:11:18.495377 containerd[2097]: time="2025-02-13T20:11:18.495324809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:11:18.495535 containerd[2097]: time="2025-02-13T20:11:18.495408934Z" level=info msg="RemovePodSandbox \"efde2821bdb249357e7243eb993bc73b38886bd7d526624708a8f91ac759a7a0\" returns successfully" Feb 13 20:11:18.495952 containerd[2097]: time="2025-02-13T20:11:18.495888834Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.553 [WARNING][7408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a95f6702-f897-4b44-9e9f-23c6d7c2741b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54", Pod:"coredns-7db6d8ff4d-5wkpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid789b3c6948", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.554 [INFO][7408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.554 [INFO][7408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" iface="eth0" netns="" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.554 [INFO][7408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.554 [INFO][7408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.585 [INFO][7414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.586 [INFO][7414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.586 [INFO][7414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.592 [WARNING][7414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.592 [INFO][7414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.595 [INFO][7414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:18.600339 containerd[2097]: 2025-02-13 20:11:18.598 [INFO][7408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.600339 containerd[2097]: time="2025-02-13T20:11:18.600307369Z" level=info msg="TearDown network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" successfully" Feb 13 20:11:18.605144 containerd[2097]: time="2025-02-13T20:11:18.600337332Z" level=info msg="StopPodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" returns successfully" Feb 13 20:11:18.605144 containerd[2097]: time="2025-02-13T20:11:18.601093120Z" level=info msg="RemovePodSandbox for \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:11:18.605144 containerd[2097]: time="2025-02-13T20:11:18.601124935Z" level=info msg="Forcibly stopping sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\"" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.661 [WARNING][7432] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a95f6702-f897-4b44-9e9f-23c6d7c2741b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 9, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-93", ContainerID:"4a071a026be0bcba719f332f7fe1a79053b7749a9f9a24a0e155dc36026ece54", Pod:"coredns-7db6d8ff4d-5wkpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid789b3c6948", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.661 [INFO][7432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.661 [INFO][7432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" iface="eth0" netns="" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.661 [INFO][7432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.661 [INFO][7432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.706 [INFO][7439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.707 [INFO][7439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.707 [INFO][7439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.714 [WARNING][7439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.716 [INFO][7439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" HandleID="k8s-pod-network.b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Workload="ip--172--31--16--93-k8s-coredns--7db6d8ff4d--5wkpf-eth0" Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.720 [INFO][7439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:18.724577 containerd[2097]: 2025-02-13 20:11:18.722 [INFO][7432] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e" Feb 13 20:11:18.726991 containerd[2097]: time="2025-02-13T20:11:18.724612134Z" level=info msg="TearDown network for sandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" successfully" Feb 13 20:11:18.730146 containerd[2097]: time="2025-02-13T20:11:18.730093878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:18.730980 containerd[2097]: time="2025-02-13T20:11:18.730253781Z" level=info msg="RemovePodSandbox \"b329744ddb60f7eceb2b5701f32b0822d1a8da2cff82cb838995ebacfc9eb13e\" returns successfully" Feb 13 20:11:19.308449 systemd[1]: Started sshd@24-172.31.16.93:22-139.178.89.65:58384.service - OpenSSH per-connection server daemon (139.178.89.65:58384). Feb 13 20:11:19.556233 sshd[7445]: Accepted publickey for core from 139.178.89.65 port 58384 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:19.562836 sshd[7445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:19.580814 systemd-logind[2075]: New session 25 of user core. Feb 13 20:11:19.586041 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:11:19.934738 sshd[7445]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:19.942410 systemd[1]: sshd@24-172.31.16.93:22-139.178.89.65:58384.service: Deactivated successfully. Feb 13 20:11:19.949976 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:11:19.951908 systemd-logind[2075]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:11:19.957125 systemd-logind[2075]: Removed session 25. Feb 13 20:11:20.111364 systemd-resolved[1977]: Under memory pressure, flushing caches. Feb 13 20:11:20.111396 systemd-resolved[1977]: Flushed all caches. Feb 13 20:11:20.113108 systemd-journald[1566]: Under memory pressure, flushing caches. Feb 13 20:11:24.965189 systemd[1]: Started sshd@25-172.31.16.93:22-139.178.89.65:47874.service - OpenSSH per-connection server daemon (139.178.89.65:47874). 
Feb 13 20:11:25.131247 sshd[7459]: Accepted publickey for core from 139.178.89.65 port 47874 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:25.133056 sshd[7459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:25.137872 systemd-logind[2075]: New session 26 of user core. Feb 13 20:11:25.146464 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:11:25.449899 sshd[7459]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:25.459512 systemd[1]: sshd@25-172.31.16.93:22-139.178.89.65:47874.service: Deactivated successfully. Feb 13 20:11:25.471627 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:11:25.475711 systemd-logind[2075]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:11:25.480541 systemd-logind[2075]: Removed session 26. Feb 13 20:11:30.481902 systemd[1]: Started sshd@26-172.31.16.93:22-139.178.89.65:47884.service - OpenSSH per-connection server daemon (139.178.89.65:47884). Feb 13 20:11:30.652702 sshd[7481]: Accepted publickey for core from 139.178.89.65 port 47884 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:30.654602 sshd[7481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:30.661037 systemd-logind[2075]: New session 27 of user core. Feb 13 20:11:30.666461 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:11:30.935361 sshd[7481]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:30.946193 systemd[1]: sshd@26-172.31.16.93:22-139.178.89.65:47884.service: Deactivated successfully. Feb 13 20:11:30.952893 systemd-logind[2075]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:11:30.953638 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:11:30.957414 systemd-logind[2075]: Removed session 27. Feb 13 20:11:35.964462 systemd[1]: Started sshd@27-172.31.16.93:22-139.178.89.65:44264.service - OpenSSH per-connection server daemon (139.178.89.65:44264). Feb 13 20:11:36.194921 sshd[7498]: Accepted publickey for core from 139.178.89.65 port 44264 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:36.199775 sshd[7498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:36.211818 systemd-logind[2075]: New session 28 of user core. Feb 13 20:11:36.215451 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:11:36.653549 sshd[7498]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:36.660059 systemd[1]: sshd@27-172.31.16.93:22-139.178.89.65:44264.service: Deactivated successfully. Feb 13 20:11:36.666463 systemd-logind[2075]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:11:36.666802 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:11:36.672021 systemd-logind[2075]: Removed session 28. Feb 13 20:11:41.685554 systemd[1]: Started sshd@28-172.31.16.93:22-139.178.89.65:44274.service - OpenSSH per-connection server daemon (139.178.89.65:44274). Feb 13 20:11:41.871847 sshd[7515]: Accepted publickey for core from 139.178.89.65 port 44274 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:11:41.874014 sshd[7515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:41.885935 systemd-logind[2075]: New session 29 of user core. Feb 13 20:11:41.892539 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 20:11:42.167186 sshd[7515]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:42.173724 systemd[1]: sshd@28-172.31.16.93:22-139.178.89.65:44274.service: Deactivated successfully. Feb 13 20:11:42.179264 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:11:42.181835 systemd-logind[2075]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:11:42.183142 systemd-logind[2075]: Removed session 29. Feb 13 20:11:58.438999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1-rootfs.mount: Deactivated successfully. Feb 13 20:11:58.476892 containerd[2097]: time="2025-02-13T20:11:58.434398238Z" level=info msg="shim disconnected" id=f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1 namespace=k8s.io Feb 13 20:11:58.476892 containerd[2097]: time="2025-02-13T20:11:58.475461313Z" level=warning msg="cleaning up after shim disconnected" id=f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1 namespace=k8s.io Feb 13 20:11:58.476892 containerd[2097]: time="2025-02-13T20:11:58.475482693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:11:58.495924 kubelet[3699]: E0213 20:11:58.495852 3699 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:11:58.769794 kubelet[3699]: I0213 20:11:58.769374 3699 scope.go:117] "RemoveContainer" containerID="f046b4067495709899dd90bb39b6e67ebab80c50347a01d17d78cbce2374d1b1" Feb 13 20:11:58.840092 containerd[2097]: time="2025-02-13T20:11:58.840026209Z" level=info msg="CreateContainer within sandbox \"efdfbe826aa377c99190f78c4f630b97d4338ed63ad377d9e331bcde5e7c84fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 20:11:58.905366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330356393.mount: Deactivated successfully. Feb 13 20:11:58.923568 containerd[2097]: time="2025-02-13T20:11:58.923404023Z" level=info msg="CreateContainer within sandbox \"efdfbe826aa377c99190f78c4f630b97d4338ed63ad377d9e331bcde5e7c84fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e5014fb8671eaad7e501603d62145a47e9b6d673bf3185a90112180de0c89d6c\"" Feb 13 20:11:58.957146 containerd[2097]: time="2025-02-13T20:11:58.956800565Z" level=info msg="StartContainer for \"e5014fb8671eaad7e501603d62145a47e9b6d673bf3185a90112180de0c89d6c\"" Feb 13 20:11:59.025413 containerd[2097]: time="2025-02-13T20:11:59.024241047Z" level=info msg="shim disconnected" id=fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b namespace=k8s.io Feb 13 20:11:59.025413 containerd[2097]: time="2025-02-13T20:11:59.024367167Z" level=warning msg="cleaning up after shim disconnected" id=fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b namespace=k8s.io Feb 13 20:11:59.025413 containerd[2097]: time="2025-02-13T20:11:59.024382328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:11:59.035566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b-rootfs.mount: Deactivated successfully. 
Feb 13 20:11:59.205207 containerd[2097]: time="2025-02-13T20:11:59.204989505Z" level=info msg="StartContainer for \"e5014fb8671eaad7e501603d62145a47e9b6d673bf3185a90112180de0c89d6c\" returns successfully" Feb 13 20:11:59.771923 kubelet[3699]: I0213 20:11:59.770655 3699 scope.go:117] "RemoveContainer" containerID="fae9f866c4e77496a4a2b11ca227961aae687e3384139f6ae0d4f6cf61fd940b" Feb 13 20:11:59.782547 containerd[2097]: time="2025-02-13T20:11:59.779808304Z" level=info msg="CreateContainer within sandbox \"2b7488ee04028f66f422c0c90991f053fddac1b4247bb9dbd90d9c232970cf37\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Feb 13 20:11:59.858693 containerd[2097]: time="2025-02-13T20:11:59.858648314Z" level=info msg="CreateContainer within sandbox \"2b7488ee04028f66f422c0c90991f053fddac1b4247bb9dbd90d9c232970cf37\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"52f56895b1d2d02eee06c3a270fc6b1f16a1da9dd196dd67ea598dc475051eb8\"" Feb 13 20:11:59.860442 containerd[2097]: time="2025-02-13T20:11:59.860410891Z" level=info msg="StartContainer for \"52f56895b1d2d02eee06c3a270fc6b1f16a1da9dd196dd67ea598dc475051eb8\"" Feb 13 20:12:00.047462 containerd[2097]: time="2025-02-13T20:12:00.047167034Z" level=info msg="StartContainer for \"52f56895b1d2d02eee06c3a270fc6b1f16a1da9dd196dd67ea598dc475051eb8\" returns successfully" Feb 13 20:12:03.560540 containerd[2097]: time="2025-02-13T20:12:03.560281524Z" level=info msg="shim disconnected" id=5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042 namespace=k8s.io Feb 13 20:12:03.560540 containerd[2097]: time="2025-02-13T20:12:03.560349901Z" level=warning msg="cleaning up after shim disconnected" id=5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042 namespace=k8s.io Feb 13 20:12:03.560540 containerd[2097]: time="2025-02-13T20:12:03.560361699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:03.563497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042-rootfs.mount: Deactivated successfully. 
Feb 13 20:12:03.795476 kubelet[3699]: I0213 20:12:03.795411 3699 scope.go:117] "RemoveContainer" containerID="5f41783b7c65f53a2e864a2d15a8d94bbf1e484c8e98cb9b99c81271a8f3f042" Feb 13 20:12:03.798286 containerd[2097]: time="2025-02-13T20:12:03.798248940Z" level=info msg="CreateContainer within sandbox \"9cea9f2d4987fc80e48f972fd2b85d4f298b5d7145c1ed90b29c03dc1df30507\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 20:12:03.822701 containerd[2097]: time="2025-02-13T20:12:03.822494942Z" level=info msg="CreateContainer within sandbox \"9cea9f2d4987fc80e48f972fd2b85d4f298b5d7145c1ed90b29c03dc1df30507\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"70b469346ca0a4cd75ba8aa7b03499de59f4acf3ea569d5f9b2896c56b5a459f\"" Feb 13 20:12:03.823495 containerd[2097]: time="2025-02-13T20:12:03.823452357Z" level=info msg="StartContainer for \"70b469346ca0a4cd75ba8aa7b03499de59f4acf3ea569d5f9b2896c56b5a459f\"" Feb 13 20:12:03.923112 containerd[2097]: time="2025-02-13T20:12:03.923040809Z" level=info msg="StartContainer for \"70b469346ca0a4cd75ba8aa7b03499de59f4acf3ea569d5f9b2896c56b5a459f\" returns successfully" Feb 13 20:12:08.497202 kubelet[3699]: E0213 20:12:08.497145 3699 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"