Jan 13 21:31:19.085993 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:31:19.086049 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:31:19.086066 kernel: BIOS-provided physical RAM map: Jan 13 21:31:19.086078 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 21:31:19.086089 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 21:31:19.086101 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 21:31:19.086120 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 13 21:31:19.086133 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 13 21:31:19.086146 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 13 21:31:19.086218 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 21:31:19.086237 kernel: NX (Execute Disable) protection: active Jan 13 21:31:19.086250 kernel: APIC: Static calls initialized Jan 13 21:31:19.086263 kernel: SMBIOS 2.7 present. Jan 13 21:31:19.086277 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 13 21:31:19.086408 kernel: Hypervisor detected: KVM Jan 13 21:31:19.086426 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:31:19.086441 kernel: kvm-clock: using sched offset of 6968020923 cycles Jan 13 21:31:19.086458 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:31:19.086472 kernel: tsc: Detected 2499.996 MHz processor Jan 13 21:31:19.086487 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:31:19.086502 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:31:19.086519 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 13 21:31:19.086534 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 21:31:19.086548 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:31:19.086563 kernel: Using GB pages for direct mapping Jan 13 21:31:19.086577 kernel: ACPI: Early table checksum verification disabled Jan 13 21:31:19.086592 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 13 21:31:19.086606 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 13 21:31:19.086621 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 21:31:19.086636 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 13 21:31:19.086654 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 13 21:31:19.086668 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 13 21:31:19.086683 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 21:31:19.086697 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 13 21:31:19.086712 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 21:31:19.086726 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 13 21:31:19.086740 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 13 21:31:19.086755 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 13 21:31:19.086769 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 13 21:31:19.086787 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 13 21:31:19.086843 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 13 21:31:19.086859 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 13 21:31:19.086875 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 13 21:31:19.086891 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 13 21:31:19.088103 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 13 21:31:19.088121 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 13 21:31:19.088137 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 13 21:31:19.088153 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 13 21:31:19.088169 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 13 21:31:19.088184 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 13 21:31:19.088200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 13 21:31:19.088215 kernel: NUMA: Initialized distance table, cnt=1 Jan 13 21:31:19.088230 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 13 21:31:19.088249 kernel: Zone ranges: Jan 13 21:31:19.088265 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:31:19.088280 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 13 21:31:19.088296 kernel: Normal empty Jan 13 21:31:19.088312 kernel: Movable zone start for each node Jan 13 21:31:19.088327 kernel: Early memory node ranges Jan 13 21:31:19.088343 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 21:31:19.088358 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 13 21:31:19.088374 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 13 21:31:19.088392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:31:19.088408 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 21:31:19.088424 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 13 21:31:19.088439 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 21:31:19.088455 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:31:19.088469 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 13 21:31:19.088485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:31:19.088500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:31:19.088516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:31:19.088534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:31:19.088550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:31:19.088566 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 21:31:19.088582 kernel: TSC deadline timer available Jan 13 21:31:19.088597 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:31:19.088612 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 21:31:19.088628 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 13 21:31:19.088644 kernel: Booting paravirtualized kernel on KVM Jan 13 21:31:19.088659 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:31:19.088674 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:31:19.088693 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:31:19.088709 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:31:19.088724 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:31:19.088739 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:31:19.088755 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:31:19.088772 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:31:19.088788 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:31:19.088806 kernel: random: crng init done Jan 13 21:31:19.088821 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:31:19.088837 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 21:31:19.088853 kernel: Fallback order for Node 0: 0 Jan 13 21:31:19.088868 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 13 21:31:19.088884 kernel: Policy zone: DMA32 Jan 13 21:31:19.088935 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:31:19.088951 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Jan 13 21:31:19.088966 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:31:19.088985 kernel: Kernel/User page tables isolation: enabled Jan 13 21:31:19.089001 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:31:19.089016 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:31:19.089032 kernel: Dynamic Preempt: voluntary Jan 13 21:31:19.089046 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:31:19.089063 kernel: rcu: RCU event tracing is enabled. Jan 13 21:31:19.089079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:31:19.089094 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:31:19.089110 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:31:19.089125 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:31:19.089143 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:31:19.089159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:31:19.089174 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:31:19.089189 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
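Note: the kernel command line logged above carries Flatcar-specific parameters (mount.usr, verity.usr, verity.usrhash) that the initrd later uses to set up the dm-verity-protected /usr volume. As a minimal illustrative sketch (not part of the boot flow itself, and the helper name is hypothetical), key=value parameters like these can be pulled out of /proc/cmdline:

    # Sketch: split /proc/cmdline into key=value pairs, e.g. to read
    # verity.usrhash or mount.usr as they appear in the log above.
    def parse_cmdline(path="/proc/cmdline"):
        params = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                # Bare flags become True; a repeated key (e.g. rootflags
                # above) keeps its last value.
                params[key] = value if sep else True
        return params

    if __name__ == "__main__":
        cmdline = parse_cmdline()
        print(cmdline.get("verity.usrhash"), cmdline.get("mount.usr"))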
Jan 13 21:31:19.089205 kernel: Console: colour VGA+ 80x25 Jan 13 21:31:19.089220 kernel: printk: console [ttyS0] enabled Jan 13 21:31:19.089235 kernel: ACPI: Core revision 20230628 Jan 13 21:31:19.089374 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 13 21:31:19.089437 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:31:19.089456 kernel: x2apic enabled Jan 13 21:31:19.089471 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:31:19.089499 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 13 21:31:19.089518 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 13 21:31:19.089535 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 13 21:31:19.089551 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 13 21:31:19.089567 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:31:19.089582 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 21:31:19.089597 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:31:19.089612 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:31:19.089629 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 13 21:31:19.089644 kernel: RETBleed: Vulnerable Jan 13 21:31:19.089664 kernel: Speculative Store Bypass: Vulnerable Jan 13 21:31:19.089680 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 21:31:19.089697 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 21:31:19.089713 kernel: GDS: Unknown: Dependent on hypervisor status Jan 13 21:31:19.089730 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:31:19.089746 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:31:19.089766 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:31:19.089782 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 13 21:31:19.089799 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 13 21:31:19.089815 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 13 21:31:19.089832 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 13 21:31:19.089848 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 13 21:31:19.089864 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 13 21:31:19.089881 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:31:19.092950 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 13 21:31:19.093041 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 13 21:31:19.093105 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 13 21:31:19.093129 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 13 21:31:19.093146 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 13 21:31:19.093163 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 13 21:31:19.093180 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 13 21:31:19.093737 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:31:19.093795 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:31:19.093813 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:31:19.093830 kernel: landlock: Up and running. Jan 13 21:31:19.093846 kernel: SELinux: Initializing. Jan 13 21:31:19.093863 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 21:31:19.093880 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 13 21:31:19.093967 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 13 21:31:19.093991 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:31:19.094009 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:31:19.094026 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:31:19.094043 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 13 21:31:19.094060 kernel: signal: max sigframe size: 3632 Jan 13 21:31:19.094077 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:31:19.094095 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:31:19.094112 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:31:19.094129 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:31:19.094149 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:31:19.094166 kernel: .... node #0, CPUs: #1 Jan 13 21:31:19.094184 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 21:31:19.094203 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 21:31:19.094220 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:31:19.094236 kernel: smpboot: Max logical packages: 1 Jan 13 21:31:19.094253 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 13 21:31:19.094270 kernel: devtmpfs: initialized Jan 13 21:31:19.094290 kernel: x86/mm: Memory block size: 128MB Jan 13 21:31:19.094308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:31:19.094324 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:31:19.094341 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:31:19.094429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:31:19.094447 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:31:19.094464 kernel: audit: type=2000 audit(1736803878.261:1): state=initialized audit_enabled=0 res=1 Jan 13 21:31:19.094480 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:31:19.094498 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:31:19.094559 kernel: cpuidle: using governor menu Jan 13 21:31:19.094577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:31:19.094593 kernel: dca service started, version 1.12.1 Jan 13 21:31:19.094611 kernel: PCI: Using configuration type 1 for base access Jan 13 21:31:19.094626 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
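Note: the BogoMIPS figures above are consistent with the detected TSC frequency; the preset per-CPU value is roughly twice 2499.996 MHz, and the SMP summary multiplies it by the two CPUs brought up. A quick arithmetic check (HZ=1000 is an assumption, the log does not print it):

    lpj = 2499996                        # "lpj=2499996" from the calibration line
    HZ = 1000                            # assumed tick rate, not shown in the log
    per_cpu = lpj * HZ * 2 / 1_000_000   # 4999.992 -> printed as 4999.99 BogoMIPS
    total = 2 * 4999.99                  # 9999.98 for the two activated CPUs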
Jan 13 21:31:19.094641 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:31:19.094658 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:31:19.094675 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:31:19.094693 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:31:19.094713 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:31:19.094729 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:31:19.094745 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:31:19.094762 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:31:19.094779 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 21:31:19.094796 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:31:19.094813 kernel: ACPI: Interpreter enabled Jan 13 21:31:19.094829 kernel: ACPI: PM: (supports S0 S5) Jan 13 21:31:19.094846 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:31:19.094867 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:31:19.094884 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 21:31:19.096838 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 21:31:19.096966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:31:19.097430 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:31:19.097661 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:31:19.097803 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:31:19.097830 kernel: acpiphp: Slot [3] registered Jan 13 21:31:19.097846 kernel: acpiphp: Slot [4] registered Jan 13 21:31:19.097863 kernel: acpiphp: Slot [5] registered Jan 13 21:31:19.097879 kernel: acpiphp: Slot [6] registered Jan 13 21:31:19.100936 kernel: acpiphp: Slot [7] registered Jan 13 21:31:19.101063 kernel: acpiphp: Slot [8] registered Jan 13 21:31:19.101081 kernel: acpiphp: Slot [9] registered Jan 13 21:31:19.101126 kernel: acpiphp: Slot [10] registered Jan 13 21:31:19.101142 kernel: acpiphp: Slot [11] registered Jan 13 21:31:19.101158 kernel: acpiphp: Slot [12] registered Jan 13 21:31:19.101181 kernel: acpiphp: Slot [13] registered Jan 13 21:31:19.101197 kernel: acpiphp: Slot [14] registered Jan 13 21:31:19.101213 kernel: acpiphp: Slot [15] registered Jan 13 21:31:19.101228 kernel: acpiphp: Slot [16] registered Jan 13 21:31:19.101288 kernel: acpiphp: Slot [17] registered Jan 13 21:31:19.101381 kernel: acpiphp: Slot [18] registered Jan 13 21:31:19.101398 kernel: acpiphp: Slot [19] registered Jan 13 21:31:19.101464 kernel: acpiphp: Slot [20] registered Jan 13 21:31:19.101482 kernel: acpiphp: Slot [21] registered Jan 13 21:31:19.101504 kernel: acpiphp: Slot [22] registered Jan 13 21:31:19.101520 kernel: acpiphp: Slot [23] registered Jan 13 21:31:19.101536 kernel: acpiphp: Slot [24] registered Jan 13 21:31:19.101551 kernel: acpiphp: Slot [25] registered Jan 13 21:31:19.101567 kernel: acpiphp: Slot [26] registered Jan 13 21:31:19.101583 kernel: acpiphp: Slot [27] registered Jan 13 21:31:19.101599 kernel: acpiphp: Slot [28] registered Jan 13 21:31:19.101614 kernel: acpiphp: Slot [29] registered Jan 13 21:31:19.101631 kernel: acpiphp: Slot [30] registered Jan 13 21:31:19.101648 kernel: acpiphp: Slot [31] registered Jan 13 21:31:19.101667 kernel: PCI host bridge to bus 0000:00 
Jan 13 21:31:19.101957 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 21:31:19.102080 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:31:19.102192 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:31:19.102301 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 13 21:31:19.102666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:31:19.102823 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:31:19.102983 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 13 21:31:19.103130 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 13 21:31:19.103439 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 21:31:19.103576 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 13 21:31:19.103702 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 13 21:31:19.103864 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 13 21:31:19.105440 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 13 21:31:19.105592 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 13 21:31:19.105726 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 13 21:31:19.105859 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 13 21:31:19.106014 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 13 21:31:19.119651 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 13 21:31:19.119829 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 13 21:31:19.121580 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 21:31:19.121774 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 13 21:31:19.123768 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 13 21:31:19.124041 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 13 21:31:19.124195 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 13 21:31:19.124218 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:31:19.124235 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:31:19.124259 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:31:19.124276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:31:19.124293 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:31:19.124310 kernel: iommu: Default domain type: Translated Jan 13 21:31:19.124327 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 21:31:19.124344 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:31:19.124359 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:31:19.124376 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 21:31:19.124392 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 13 21:31:19.124533 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 13 21:31:19.124677 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 13 21:31:19.124887 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 21:31:19.130983 kernel: vgaarb: loaded Jan 13 21:31:19.131005 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 13 21:31:19.131023 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 13 21:31:19.131039 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 13 21:31:19.131057 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:31:19.131074 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:31:19.131098 kernel: pnp: PnP ACPI init Jan 13 21:31:19.131124 kernel: pnp: PnP ACPI: found 5 devices Jan 13 21:31:19.131141 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:31:19.131157 kernel: NET: Registered PF_INET protocol family Jan 13 21:31:19.131174 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:31:19.131191 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 13 21:31:19.131208 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:31:19.131225 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:31:19.131242 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 13 21:31:19.131262 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 13 21:31:19.131279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 21:31:19.131295 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 13 21:31:19.131312 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:31:19.131329 kernel: NET: Registered PF_XDP protocol family Jan 13 21:31:19.131525 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:31:19.131650 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:31:19.131773 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:31:19.131940 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 13 21:31:19.132094 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:31:19.132117 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:31:19.132135 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:31:19.132152 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 13 21:31:19.132168 kernel: clocksource: Switched to clocksource tsc Jan 13 21:31:19.132185 kernel: Initialise system trusted keyrings Jan 13 21:31:19.132202 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 21:31:19.132223 kernel: Key type asymmetric registered Jan 13 21:31:19.132238 kernel: Asymmetric key parser 'x509' registered Jan 13 21:31:19.132254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:31:19.132270 kernel: io scheduler mq-deadline registered Jan 13 21:31:19.132286 kernel: io scheduler kyber registered Jan 13 21:31:19.132303 kernel: io scheduler bfq registered Jan 13 21:31:19.132320 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:31:19.132336 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:31:19.132352 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:31:19.132372 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:31:19.132388 kernel: i8042: Warning: Keylock active Jan 13 21:31:19.132405 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:31:19.132485 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:31:19.132658 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 21:31:19.132790 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 
21:31:19.132934 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:31:18 UTC (1736803878) Jan 13 21:31:19.133060 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 21:31:19.133086 kernel: intel_pstate: CPU model not supported Jan 13 21:31:19.133104 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:31:19.133120 kernel: Segment Routing with IPv6 Jan 13 21:31:19.133137 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:31:19.133153 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:31:19.133170 kernel: Key type dns_resolver registered Jan 13 21:31:19.133186 kernel: IPI shorthand broadcast: enabled Jan 13 21:31:19.133202 kernel: sched_clock: Marking stable (603067538, 326257484)->(1006865909, -77540887) Jan 13 21:31:19.133219 kernel: registered taskstats version 1 Jan 13 21:31:19.133239 kernel: Loading compiled-in X.509 certificates Jan 13 21:31:19.133368 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:31:19.133446 kernel: Key type .fscrypt registered Jan 13 21:31:19.133463 kernel: Key type fscrypt-provisioning registered Jan 13 21:31:19.133480 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 21:31:19.133497 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:31:19.133514 kernel: ima: No architecture policies found Jan 13 21:31:19.133530 kernel: clk: Disabling unused clocks Jan 13 21:31:19.133546 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:31:19.133567 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:31:19.133585 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:31:19.133602 kernel: Run /init as init process Jan 13 21:31:19.133675 kernel: with arguments: Jan 13 21:31:19.133697 kernel: /init Jan 13 21:31:19.133715 kernel: with environment: Jan 13 21:31:19.133732 kernel: HOME=/ Jan 13 21:31:19.133749 kernel: TERM=linux Jan 13 21:31:19.133766 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:31:19.133794 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:31:19.133829 systemd[1]: Detected virtualization amazon. Jan 13 21:31:19.133850 systemd[1]: Detected architecture x86-64. Jan 13 21:31:19.133867 systemd[1]: Running in initrd. Jan 13 21:31:19.133885 systemd[1]: No hostname configured, using default hostname. Jan 13 21:31:19.136047 systemd[1]: Hostname set to . Jan 13 21:31:19.136071 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:31:19.136090 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:31:19.136109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:31:19.136127 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:31:19.136148 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:31:19.136166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:31:19.136185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 13 21:31:19.136210 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:31:19.136231 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:31:19.136249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:31:19.136268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:31:19.136286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:31:19.136304 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:31:19.136326 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:31:19.136344 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:31:19.136362 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:31:19.136381 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:31:19.136399 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:31:19.136418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:31:19.136436 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:31:19.136454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:31:19.136621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:31:19.136645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:31:19.136665 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:31:19.136682 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:31:19.136706 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:31:19.136728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:31:19.136746 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:31:19.136765 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:31:19.136786 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:31:19.136809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:31:19.136882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:31:19.139992 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:31:19.140059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:31:19.140079 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:31:19.140213 systemd-journald[178]: Collecting audit messages is disabled. Jan 13 21:31:19.140259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:31:19.140313 systemd-journald[178]: Journal started Jan 13 21:31:19.140479 systemd-journald[178]: Runtime Journal (/run/log/journal/ec28e417b4eed56d9857009551310524) is 4.8M, max 38.6M, 33.7M free. Jan 13 21:31:19.114818 systemd-modules-load[179]: Inserted module 'overlay' Jan 13 21:31:19.149938 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:31:19.179925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 21:31:19.183326 kernel: Bridge firewalling registered Jan 13 21:31:19.182550 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 13 21:31:19.184857 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:31:19.358679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:31:19.367733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:31:19.371512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:31:19.387424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:31:19.397202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:31:19.400036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:31:19.400816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:31:19.414854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:31:19.429116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:31:19.429444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:31:19.433663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:31:19.443193 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:31:19.458253 dracut-cmdline[214]: dracut-dracut-053 Jan 13 21:31:19.461085 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:31:19.526116 systemd-resolved[207]: Positive Trust Anchors: Jan 13 21:31:19.526133 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:31:19.526194 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:31:19.542185 systemd-resolved[207]: Defaulting to hostname 'linux'. Jan 13 21:31:19.544684 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:31:19.546151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:31:19.586928 kernel: SCSI subsystem initialized Jan 13 21:31:19.596932 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 21:31:19.608926 kernel: iscsi: registered transport (tcp) Jan 13 21:31:19.632012 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:31:19.632100 kernel: QLogic iSCSI HBA Driver Jan 13 21:31:19.674667 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:31:19.682174 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:31:19.712201 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:31:19.712274 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:31:19.712289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:31:19.755942 kernel: raid6: avx512x4 gen() 8794 MB/s Jan 13 21:31:19.772946 kernel: raid6: avx512x2 gen() 13773 MB/s Jan 13 21:31:19.789948 kernel: raid6: avx512x1 gen() 16820 MB/s Jan 13 21:31:19.810957 kernel: raid6: avx2x4 gen() 12008 MB/s Jan 13 21:31:19.828948 kernel: raid6: avx2x2 gen() 6659 MB/s Jan 13 21:31:19.846392 kernel: raid6: avx2x1 gen() 5324 MB/s Jan 13 21:31:19.846469 kernel: raid6: using algorithm avx512x1 gen() 16820 MB/s Jan 13 21:31:19.863961 kernel: raid6: .... xor() 9217 MB/s, rmw enabled Jan 13 21:31:19.864041 kernel: raid6: using avx512x2 recovery algorithm Jan 13 21:31:19.907929 kernel: xor: automatically using best checksumming function avx Jan 13 21:31:20.095920 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:31:20.106918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:31:20.113213 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:31:20.140865 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 13 21:31:20.146840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:31:20.157110 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:31:20.186331 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jan 13 21:31:20.221653 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:31:20.229115 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:31:20.357432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:31:20.370258 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:31:20.417518 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:31:20.425313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:31:20.453156 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:31:20.455802 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:31:20.469387 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:31:20.475037 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 21:31:20.532040 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 21:31:20.532327 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:31:20.532351 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 13 21:31:20.532521 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:d7:40:43:39:35 Jan 13 21:31:20.532699 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 13 21:31:20.532722 kernel: AES CTR mode by8 optimization enabled Jan 13 21:31:20.514807 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:31:20.542027 (udev-worker)[461]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:31:20.550023 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 21:31:20.550282 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 13 21:31:20.556515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:31:20.557624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:31:20.562796 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:31:20.564118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:31:20.564341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:31:20.575754 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 21:31:20.565629 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:31:20.580775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:31:20.586536 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:31:20.586570 kernel: GPT:9289727 != 16777215 Jan 13 21:31:20.586588 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:31:20.586605 kernel: GPT:9289727 != 16777215 Jan 13 21:31:20.586622 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:31:20.586639 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:31:20.716940 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (451) Jan 13 21:31:20.717945 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (448) Jan 13 21:31:20.748339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:31:20.759345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:31:20.830008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:31:20.862805 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 21:31:20.882775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 21:31:20.898777 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 21:31:20.902039 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 21:31:20.929673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:31:20.948378 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:31:20.963284 disk-uuid[632]: Primary Header is updated. Jan 13 21:31:20.963284 disk-uuid[632]: Secondary Entries is updated. Jan 13 21:31:20.963284 disk-uuid[632]: Secondary Header is updated. Jan 13 21:31:20.968923 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:31:20.977925 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:31:20.983924 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:31:21.987918 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:31:21.988314 disk-uuid[633]: The operation has completed successfully. 
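Note: the GPT warnings above ("GPT:9289727 != 16777215") mean the backup GPT header recorded in the image is not at the last LBA of the larger EBS volume; the disk-uuid run then updates the primary and secondary headers and entries, as logged. A hedged sketch of the same check the kernel performs, reading the primary header at LBA 1 (device path and sector size are assumptions; requires root):

    import os
    import struct

    def gpt_alternate_lba_check(dev="/dev/nvme0n1", sector=512):
        # The primary GPT header sits at LBA 1; the alternate-header LBA is
        # the 8-byte little-endian field at offset 32 of that header.
        with open(dev, "rb") as f:
            f.seek(sector)
            hdr = f.read(92)
            if hdr[:8] != b"EFI PART":
                raise ValueError("no GPT signature found")
            alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]
            f.seek(0, os.SEEK_END)
            last_lba = f.tell() // sector - 1   # where the backup header should be
        # On this instance the log shows the pair (9289727, 16777215) before
        # the headers were updated.
        return alternate_lba, last_lba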
Jan 13 21:31:22.204214 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:31:22.204344 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:31:22.230121 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:31:22.242973 sh[976]: Success Jan 13 21:31:22.266924 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:31:22.371721 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:31:22.382034 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:31:22.384368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:31:22.422574 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:31:22.422651 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:31:22.422671 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:31:22.422689 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:31:22.423912 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:31:22.512927 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:31:22.527749 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:31:22.528568 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:31:22.536156 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:31:22.538407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:31:22.573483 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:31:22.573550 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:31:22.573572 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:31:22.579924 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:31:22.591584 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:31:22.594939 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:31:22.600729 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:31:22.610215 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:31:22.664374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:31:22.678372 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:31:22.719929 systemd-networkd[1168]: lo: Link UP Jan 13 21:31:22.719939 systemd-networkd[1168]: lo: Gained carrier Jan 13 21:31:22.721540 systemd-networkd[1168]: Enumeration completed Jan 13 21:31:22.724770 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:31:22.726531 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:31:22.727980 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:31:22.729433 systemd[1]: Reached target network.target - Network. 
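Note: systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network and, just below, acquires a DHCPv4 lease for it. The actual contents of that unit are not in the log; a minimal .network unit with the behaviour shown here (match any interface, enable DHCP) would look like this illustrative sketch, not the verbatim Flatcar file:

    [Match]
    # Match every interface name; the log notes the name may be unpredictable.
    Name=*

    [Network]
    # Request addresses over DHCP, as seen for eth0 (172.31.23.216/20).
    DHCP=yes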
Jan 13 21:31:22.745183 systemd-networkd[1168]: eth0: Link UP Jan 13 21:31:22.745194 systemd-networkd[1168]: eth0: Gained carrier Jan 13 21:31:22.745213 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:31:22.759039 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.23.216/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:31:22.827690 ignition[1116]: Ignition 2.19.0 Jan 13 21:31:22.827704 ignition[1116]: Stage: fetch-offline Jan 13 21:31:22.827988 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:22.828002 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:22.828321 ignition[1116]: Ignition finished successfully Jan 13 21:31:22.834545 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:31:22.842158 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:31:22.871487 ignition[1177]: Ignition 2.19.0 Jan 13 21:31:22.871502 ignition[1177]: Stage: fetch Jan 13 21:31:22.871979 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:22.871993 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:22.872212 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:22.898052 ignition[1177]: PUT result: OK Jan 13 21:31:22.900350 ignition[1177]: parsed url from cmdline: "" Jan 13 21:31:22.900359 ignition[1177]: no config URL provided Jan 13 21:31:22.900367 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:31:22.900380 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:31:22.900400 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:22.903759 ignition[1177]: PUT result: OK Jan 13 21:31:22.903838 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 21:31:22.908287 ignition[1177]: GET result: OK Jan 13 21:31:22.909455 ignition[1177]: parsing config with SHA512: 132ca03fb3d2e35782700f54c97489cba188f0416629e4f703348f64fe272c7f2f66a0a763d1f4c69a1c80d306083ada7da21471018b819d5dac431006d1fad3 Jan 13 21:31:22.914593 unknown[1177]: fetched base config from "system" Jan 13 21:31:22.914609 unknown[1177]: fetched base config from "system" Jan 13 21:31:22.914622 unknown[1177]: fetched user config from "aws" Jan 13 21:31:22.918133 ignition[1177]: fetch: fetch complete Jan 13 21:31:22.918146 ignition[1177]: fetch: fetch passed Jan 13 21:31:22.918839 ignition[1177]: Ignition finished successfully Jan 13 21:31:22.922215 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:31:22.930375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:31:22.958927 ignition[1183]: Ignition 2.19.0 Jan 13 21:31:22.958938 ignition[1183]: Stage: kargs Jan 13 21:31:22.959324 ignition[1183]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:22.959334 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:22.959419 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:22.963088 ignition[1183]: PUT result: OK Jan 13 21:31:22.970533 ignition[1183]: kargs: kargs passed Jan 13 21:31:22.970700 ignition[1183]: Ignition finished successfully Jan 13 21:31:22.975528 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:31:22.987170 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
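Note: the fetch stage above talks to the EC2 instance metadata service: a PUT to /latest/api/token followed by a GET of /2019-10-01/user-data, after which the config is identified by its SHA512. A hedged Python sketch of that two-step IMDSv2 exchange (URL paths are taken from the log; the token header names are EC2's documented IMDSv2 headers, not shown in the log; this is not Ignition's own code):

    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data(ttl_seconds="300", timeout=5):
        # Step 1: PUT for a session token (the "PUT .../api/token" lines).
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": ttl_seconds},
        )
        token = urllib.request.urlopen(token_req, timeout=timeout).read().decode()

        # Step 2: GET user-data with the token (the "GET .../user-data" line).
        data_req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        body = urllib.request.urlopen(data_req, timeout=timeout).read()

        # The log then reports "parsing config with SHA512: ..."; the digest
        # of the fetched body can be computed the same way.
        return body, hashlib.sha512(body).hexdigest()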
Jan 13 21:31:23.028938 ignition[1189]: Ignition 2.19.0 Jan 13 21:31:23.028952 ignition[1189]: Stage: disks Jan 13 21:31:23.029627 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:23.029641 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:23.029833 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:23.031596 ignition[1189]: PUT result: OK Jan 13 21:31:23.038744 ignition[1189]: disks: disks passed Jan 13 21:31:23.038823 ignition[1189]: Ignition finished successfully Jan 13 21:31:23.041456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:31:23.045306 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:31:23.047878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:31:23.050662 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:31:23.050760 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:31:23.054018 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:31:23.067148 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:31:23.096610 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:31:23.099961 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:31:23.111166 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:31:23.243457 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:31:23.244305 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:31:23.245787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:31:23.261075 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:31:23.266146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:31:23.269655 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:31:23.272418 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:31:23.276544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:31:23.296409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:31:23.300304 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1216) Jan 13 21:31:23.303851 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:31:23.303929 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:31:23.303955 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:31:23.307271 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:31:23.314923 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:31:23.318719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:31:23.603965 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:31:23.624191 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:31:23.642351 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:31:23.650074 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:31:23.989523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:31:23.998059 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:31:24.004682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:31:24.012975 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:31:24.014531 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:31:24.061452 ignition[1332]: INFO : Ignition 2.19.0 Jan 13 21:31:24.061452 ignition[1332]: INFO : Stage: mount Jan 13 21:31:24.064881 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:24.066622 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:24.067995 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:24.070166 ignition[1332]: INFO : PUT result: OK Jan 13 21:31:24.071751 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:31:24.076086 ignition[1332]: INFO : mount: mount passed Jan 13 21:31:24.076086 ignition[1332]: INFO : Ignition finished successfully Jan 13 21:31:24.078278 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:31:24.085138 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:31:24.101461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:31:24.136945 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1343) Jan 13 21:31:24.138928 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:31:24.138987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:31:24.139008 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:31:24.145925 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:31:24.148313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:31:24.174503 ignition[1360]: INFO : Ignition 2.19.0 Jan 13 21:31:24.174503 ignition[1360]: INFO : Stage: files Jan 13 21:31:24.176613 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:24.176613 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:24.176613 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:24.180773 ignition[1360]: INFO : PUT result: OK Jan 13 21:31:24.184083 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:31:24.187477 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:31:24.187477 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:31:24.194226 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:31:24.195705 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:31:24.198387 unknown[1360]: wrote ssh authorized keys file for user: core Jan 13 21:31:24.200092 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:31:24.202773 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:31:24.205118 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:31:24.205118 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:31:24.205118 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:31:24.314458 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:31:24.494966 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 
21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:24.501192 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 21:31:24.654092 systemd-networkd[1168]: eth0: Gained IPv6LL Jan 13 21:31:24.995648 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:31:25.411308 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 21:31:25.411308 ignition[1360]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 13 21:31:25.417526 ignition[1360]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:31:25.430054 ignition[1360]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:31:25.430054 ignition[1360]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 13 21:31:25.430054 ignition[1360]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 13 21:31:25.436466 ignition[1360]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:31:25.439722 ignition[1360]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:31:25.439722 ignition[1360]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 13 21:31:25.439722 ignition[1360]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:31:25.444441 ignition[1360]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:31:25.446070 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:31:25.447953 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:31:25.447953 ignition[1360]: INFO : files: files passed Jan 13 21:31:25.450629 ignition[1360]: INFO : Ignition finished successfully Jan 13 21:31:25.452200 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:31:25.458063 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:31:25.467123 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
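The files stage above is driven by an Ignition config: the file, link, and unit operations it logs correspond to the storage and systemd sections of a spec-v3 config. A hypothetical reconstruction in Python, assuming standard Ignition v3 field names; the ssh key and the prepare-helm.service body are placeholders, and only the paths and download URLs come from the log:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            # files: ensureUsers op(1)/op(2): modify user "core", install ssh keys
            "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}],
        },
        "storage": {
            "files": [
                # createFiles op(4): archive fetched from get.helm.sh
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                # createFiles op(b): the kubernetes sysext image
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
            ],
            "links": [
                # createFiles op(a): activate the sysext by linking it under /etc/extensions
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # op(e)/op(10): write prepare-helm.service and preset it to enabled
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin (placeholder)\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))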
Jan 13 21:31:25.468816 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:31:25.468928 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:31:25.480613 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:31:25.480613 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:31:25.484241 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:31:25.486830 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:31:25.489868 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:31:25.499099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:31:25.530438 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:31:25.530619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:31:25.539767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:31:25.542321 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:31:25.544477 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:31:25.551251 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:31:25.565133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:31:25.571157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:31:25.588199 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:31:25.590602 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:31:25.593129 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:31:25.594253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:31:25.594421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:31:25.598714 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:31:25.601062 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:31:25.603084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:31:25.605319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:31:25.607692 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:31:25.610121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:31:25.617719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:31:25.620872 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:31:25.635958 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:31:25.643065 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:31:25.646199 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:31:25.646383 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:31:25.649617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:31:25.654366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 21:31:25.657192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:31:25.657373 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:31:25.670693 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:31:25.672061 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:31:25.674932 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:31:25.676138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:31:25.677475 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:31:25.677840 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:31:25.699390 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:31:25.723499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:31:25.726164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:31:25.727742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:31:25.731395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:31:25.733045 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:31:25.746550 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:31:25.746684 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:31:25.753792 ignition[1413]: INFO : Ignition 2.19.0 Jan 13 21:31:25.753792 ignition[1413]: INFO : Stage: umount Jan 13 21:31:25.753792 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:31:25.753792 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:31:25.753792 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:31:25.753792 ignition[1413]: INFO : PUT result: OK Jan 13 21:31:25.763324 ignition[1413]: INFO : umount: umount passed Jan 13 21:31:25.763324 ignition[1413]: INFO : Ignition finished successfully Jan 13 21:31:25.762032 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:31:25.762134 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:31:25.766184 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:31:25.766279 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:31:25.768458 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:31:25.768592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:31:25.770447 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:31:25.770496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:31:25.781811 systemd[1]: Stopped target network.target - Network. Jan 13 21:31:25.783103 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:31:25.783410 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:31:25.785046 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:31:25.787161 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:31:25.793503 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:31:25.797354 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 13 21:31:25.799623 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:31:25.801838 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:31:25.802088 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:31:25.805155 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:31:25.805224 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:31:25.808235 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:31:25.809460 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:31:25.811559 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:31:25.812923 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:31:25.815688 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:31:25.818265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:31:25.822961 systemd-networkd[1168]: eth0: DHCPv6 lease lost Jan 13 21:31:25.822980 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:31:25.825548 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:31:25.826574 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:31:25.829521 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:31:25.829626 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:31:25.847517 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:31:25.847596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:31:25.860063 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:31:25.863591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:31:25.863844 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:31:25.870310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:31:25.870394 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:31:25.873436 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:31:25.873524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:31:25.876157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:31:25.876238 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:31:25.879087 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:31:25.900499 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:31:25.900636 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:31:25.903991 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:31:25.905962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:31:25.910338 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:31:25.910398 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:31:25.913106 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:31:25.913155 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:31:25.914291 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:31:25.914344 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:31:25.919152 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:31:25.919227 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:31:25.922708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:31:25.922768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:31:25.934317 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:31:25.936705 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:31:25.936782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:31:25.938248 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:31:25.938326 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:31:25.944076 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:31:25.944138 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:31:25.953283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:31:25.954691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:31:25.962483 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:31:25.962621 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:31:25.973758 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:31:25.974253 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:31:25.978687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:31:25.980095 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:31:25.980163 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:31:25.993087 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:31:26.041149 systemd[1]: Switching root. Jan 13 21:31:26.080227 systemd-journald[178]: Journal stopped Jan 13 21:31:27.998406 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 13 21:31:27.998497 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:31:27.998525 kernel: SELinux: policy capability open_perms=1 Jan 13 21:31:27.998546 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:31:27.998566 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:31:27.998592 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:31:27.998612 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:31:27.998638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:31:27.998659 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:31:27.998679 kernel: audit: type=1403 audit(1736803886.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:31:27.998706 systemd[1]: Successfully loaded SELinux policy in 55.578ms. Jan 13 21:31:27.998740 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.926ms. 
Jan 13 21:31:27.998764 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:31:27.998786 systemd[1]: Detected virtualization amazon. Jan 13 21:31:27.998808 systemd[1]: Detected architecture x86-64. Jan 13 21:31:27.998829 systemd[1]: Detected first boot. Jan 13 21:31:27.998855 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:31:27.998883 zram_generator::config[1472]: No configuration found. Jan 13 21:31:28.025236 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:31:28.025270 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:31:28.025291 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:31:28.025314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:31:28.025335 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:31:28.025355 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:31:28.025386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:31:28.025406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:31:28.025427 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:31:28.025446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:31:28.025466 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:31:28.025486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:31:28.025594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:31:28.025620 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:31:28.025640 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:31:28.025663 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:31:28.025684 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:31:28.025703 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:31:28.025723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:31:28.025743 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:31:28.025763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:31:28.025789 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:31:28.025808 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:31:28.025832 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:31:28.025851 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:31:28.025872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:31:28.025891 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 13 21:31:28.025923 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:31:28.025943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:31:28.025964 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:31:28.025983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:31:28.026003 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:31:28.026027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:31:28.026046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:31:28.026066 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:31:28.026087 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:28.026106 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:31:28.026126 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:31:28.026146 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:31:28.026166 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:31:28.026187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:31:28.026210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:31:28.026231 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:31:28.026251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:31:28.026271 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:31:28.026292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:31:28.026312 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:31:28.026331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:31:28.026351 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:31:28.026374 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 21:31:28.026395 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 21:31:28.026414 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:31:28.026434 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:31:28.026454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:31:28.026475 kernel: loop: module loaded Jan 13 21:31:28.026495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:31:28.026515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:31:28.026536 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:28.026561 kernel: fuse: init (API version 7.39) Jan 13 21:31:28.026577 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 13 21:31:28.026597 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:31:28.026617 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:31:28.026637 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:31:28.026657 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:31:28.026713 systemd-journald[1569]: Collecting audit messages is disabled. Jan 13 21:31:28.026753 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:31:28.026774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:31:28.026794 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:31:28.026813 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:31:28.026833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:31:28.026853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:31:28.026873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:31:28.038795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:31:28.038867 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:31:28.038891 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:31:28.038926 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:31:28.038948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:31:28.038970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:31:28.038993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:31:28.039019 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:31:28.039044 systemd-journald[1569]: Journal started Jan 13 21:31:28.039091 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec28e417b4eed56d9857009551310524) is 4.8M, max 38.6M, 33.7M free. Jan 13 21:31:28.051120 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:31:28.049402 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:31:28.060090 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:31:28.069133 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:31:28.072072 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:31:28.082978 kernel: ACPI: bus type drm_connector registered Jan 13 21:31:28.081054 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:31:28.106112 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:31:28.106271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:31:28.114096 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:31:28.115691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:31:28.126509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 21:31:28.162177 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec28e417b4eed56d9857009551310524 is 111.772ms for 943 entries. Jan 13 21:31:28.162177 systemd-journald[1569]: System Journal (/var/log/journal/ec28e417b4eed56d9857009551310524) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:31:28.314092 systemd-journald[1569]: Received client request to flush runtime journal. Jan 13 21:31:28.163885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:31:28.170653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:31:28.172647 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:31:28.173336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:31:28.175673 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:31:28.177247 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:31:28.244748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:31:28.246490 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:31:28.286731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:31:28.301677 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:31:28.318057 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:31:28.327463 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 13 21:31:28.327493 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 13 21:31:28.328219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:31:28.342012 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:31:28.357182 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:31:28.362974 udevadm[1629]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:31:28.407825 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:31:28.420265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:31:28.446484 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Jan 13 21:31:28.446934 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Jan 13 21:31:28.456169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:31:29.083184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:31:29.091158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:31:29.134373 systemd-udevd[1649]: Using default interface naming scheme 'v255'. Jan 13 21:31:29.171676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:31:29.183170 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:31:29.223105 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:31:29.330369 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:31:29.379781 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 13 21:31:29.381871 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:31:29.481972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:31:29.488520 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:31:29.488623 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 21:31:29.492922 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 21:31:29.499210 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:31:29.564521 systemd-networkd[1652]: lo: Link UP Jan 13 21:31:29.565145 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1660) Jan 13 21:31:29.564534 systemd-networkd[1652]: lo: Gained carrier Jan 13 21:31:29.572627 systemd-networkd[1652]: Enumeration completed Jan 13 21:31:29.572818 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:31:29.579811 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:31:29.579817 systemd-networkd[1652]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:31:29.584079 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:31:29.598137 systemd-networkd[1652]: eth0: Link UP Jan 13 21:31:29.600701 systemd-networkd[1652]: eth0: Gained carrier Jan 13 21:31:29.600738 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:31:29.616551 systemd-networkd[1652]: eth0: DHCPv4 address 172.31.23.216/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:31:29.627132 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 21:31:29.724964 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:31:29.761284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:31:29.814629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:31:29.816009 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:31:29.844226 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:31:29.881978 lvm[1769]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:31:29.920057 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:31:30.037527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:31:30.046539 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:31:30.053404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:31:30.065150 lvm[1774]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:31:30.099415 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:31:30.101226 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:31:30.103181 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 13 21:31:30.103324 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:31:30.104556 systemd[1]: Reached target machines.target - Containers. Jan 13 21:31:30.108160 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:31:30.115357 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:31:30.122560 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:31:30.124136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:31:30.128989 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:31:30.142276 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:31:30.163941 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:31:30.178127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:31:30.190316 kernel: loop0: detected capacity change from 0 to 61336 Jan 13 21:31:30.197178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:31:30.220643 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:31:30.222741 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:31:30.255074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:31:30.278924 kernel: loop1: detected capacity change from 0 to 140768 Jan 13 21:31:30.369919 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 21:31:30.471935 kernel: loop3: detected capacity change from 0 to 211296 Jan 13 21:31:30.534116 kernel: loop4: detected capacity change from 0 to 61336 Jan 13 21:31:30.553938 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:31:30.604931 kernel: loop6: detected capacity change from 0 to 142488 Jan 13 21:31:30.635000 kernel: loop7: detected capacity change from 0 to 211296 Jan 13 21:31:30.671491 (sd-merge)[1798]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:31:30.672291 (sd-merge)[1798]: Merged extensions into '/usr'. Jan 13 21:31:30.680310 systemd[1]: Reloading requested from client PID 1784 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:31:30.680328 systemd[1]: Reloading... Jan 13 21:31:30.828925 zram_generator::config[1832]: No configuration found. Jan 13 21:31:31.036145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:31:31.048734 ldconfig[1780]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:31:31.055005 systemd-networkd[1652]: eth0: Gained IPv6LL Jan 13 21:31:31.163235 systemd[1]: Reloading finished in 482 ms. Jan 13 21:31:31.184469 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:31:31.186567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:31:31.188530 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:31:31.206528 systemd[1]: Starting ensure-sysext.service... 
Jan 13 21:31:31.222173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:31:31.235990 systemd[1]: Reloading requested from client PID 1884 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:31:31.236009 systemd[1]: Reloading... Jan 13 21:31:31.256347 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:31:31.257123 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:31:31.258762 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:31:31.259413 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jan 13 21:31:31.259528 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jan 13 21:31:31.267068 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:31:31.267086 systemd-tmpfiles[1885]: Skipping /boot Jan 13 21:31:31.283035 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:31:31.283055 systemd-tmpfiles[1885]: Skipping /boot Jan 13 21:31:31.373945 zram_generator::config[1914]: No configuration found. Jan 13 21:31:31.513315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:31:31.626191 systemd[1]: Reloading finished in 389 ms. Jan 13 21:31:31.662319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:31:31.694233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:31:31.707401 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:31:31.713104 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:31:31.722107 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:31:31.735497 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:31:31.772066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:31.772682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:31:31.778271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:31:31.795509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:31:31.812869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:31:31.814261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:31:31.814724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:31.832738 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:31:31.848984 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:31:31.851953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 21:31:31.852281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:31:31.854409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:31:31.854720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:31:31.887508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:31.888138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:31:31.902635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:31:31.914142 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:31:31.918945 augenrules[2006]: No rules Jan 13 21:31:31.920610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:31:31.922073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:31:31.922327 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:31:31.940480 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:31:31.945036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:31:31.956155 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:31:31.959132 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:31:31.959415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:31:31.962829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:31:31.963157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:31:31.969414 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:31:31.972291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:31:31.981830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:31:31.986352 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:31:31.995677 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:31:32.027987 systemd-resolved[1974]: Positive Trust Anchors: Jan 13 21:31:32.028008 systemd-resolved[1974]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:31:32.028061 systemd-resolved[1974]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:31:32.029581 systemd[1]: Finished ensure-sysext.service. Jan 13 21:31:32.041093 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:31:32.047673 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 21:31:32.049056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:31:32.049296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:31:32.052730 systemd-resolved[1974]: Defaulting to hostname 'linux'. Jan 13 21:31:32.057309 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:31:32.059298 systemd[1]: Reached target network.target - Network. Jan 13 21:31:32.060515 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:31:32.061991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:31:32.063413 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:31:32.064975 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:31:32.066854 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:31:32.068549 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:31:32.070668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:31:32.072435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:31:32.074251 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:31:32.074284 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:31:32.075477 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:31:32.077930 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:31:32.084016 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:31:32.088032 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:31:32.107432 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:31:32.108788 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:31:32.110315 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:31:32.112036 systemd[1]: System is tainted: cgroupsv1 Jan 13 21:31:32.112105 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:31:32.112139 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:31:32.128218 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:31:32.142160 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:31:32.145986 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:31:32.157211 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:31:32.183106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:31:32.185309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:31:32.196051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 21:31:32.208985 jq[2035]: false Jan 13 21:31:32.212509 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:31:32.247414 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:31:32.270210 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:31:32.284162 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:31:32.298104 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:31:32.312800 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:31:32.318156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:31:32.355645 extend-filesystems[2036]: Found loop4 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found loop5 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found loop6 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found loop7 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p1 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p2 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p3 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found usr Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p4 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p6 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p7 Jan 13 21:31:32.355645 extend-filesystems[2036]: Found nvme0n1p9 Jan 13 21:31:32.355645 extend-filesystems[2036]: Checking size of /dev/nvme0n1p9 Jan 13 21:31:32.344937 dbus-daemon[2034]: [system] SELinux support is enabled Jan 13 21:31:32.345501 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:31:32.386281 dbus-daemon[2034]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1652 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:31:32.347294 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:31:32.370468 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:31:32.388982 coreos-metadata[2033]: Jan 13 21:31:32.388 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:31:32.392345 coreos-metadata[2033]: Jan 13 21:31:32.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:31:32.393838 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:31:32.401240 coreos-metadata[2033]: Jan 13 21:31:32.397 INFO Fetch successful Jan 13 21:31:32.401240 coreos-metadata[2033]: Jan 13 21:31:32.397 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:31:32.396444 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 13 21:31:32.404297 coreos-metadata[2033]: Jan 13 21:31:32.403 INFO Fetch successful Jan 13 21:31:32.404297 coreos-metadata[2033]: Jan 13 21:31:32.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:31:32.405643 coreos-metadata[2033]: Jan 13 21:31:32.405 INFO Fetch successful Jan 13 21:31:32.405643 coreos-metadata[2033]: Jan 13 21:31:32.405 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:31:32.408208 coreos-metadata[2033]: Jan 13 21:31:32.408 INFO Fetch successful Jan 13 21:31:32.408208 coreos-metadata[2033]: Jan 13 21:31:32.408 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:31:32.409050 coreos-metadata[2033]: Jan 13 21:31:32.409 INFO Fetch failed with 404: resource not found Jan 13 21:31:32.411534 coreos-metadata[2033]: Jan 13 21:31:32.411 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:31:32.412976 coreos-metadata[2033]: Jan 13 21:31:32.412 INFO Fetch successful Jan 13 21:31:32.412976 coreos-metadata[2033]: Jan 13 21:31:32.412 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:31:32.413923 coreos-metadata[2033]: Jan 13 21:31:32.413 INFO Fetch successful Jan 13 21:31:32.413923 coreos-metadata[2033]: Jan 13 21:31:32.413 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:31:32.414718 coreos-metadata[2033]: Jan 13 21:31:32.414 INFO Fetch successful Jan 13 21:31:32.414965 coreos-metadata[2033]: Jan 13 21:31:32.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:31:32.416609 coreos-metadata[2033]: Jan 13 21:31:32.416 INFO Fetch successful Jan 13 21:31:32.420123 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:31:32.420573 coreos-metadata[2033]: Jan 13 21:31:32.416 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:31:32.421132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:31:32.436924 coreos-metadata[2033]: Jan 13 21:31:32.425 INFO Fetch successful Jan 13 21:31:32.436498 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:31:32.437144 extend-filesystems[2036]: Resized partition /dev/nvme0n1p9 Jan 13 21:31:32.444811 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:31:32.436860 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:31:32.446064 update_engine[2060]: I20250113 21:31:32.440078 2060 main.cc:92] Flatcar Update Engine starting Jan 13 21:31:32.446064 update_engine[2060]: I20250113 21:31:32.443746 2060 update_check_scheduler.cc:74] Next update check in 10m15s Jan 13 21:31:32.446464 extend-filesystems[2076]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:31:32.466185 jq[2067]: true Jan 13 21:31:32.474601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: ---------------------------------------------------- Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: corporation. Support and training for ntp-4 are Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: available at https://www.nwtime.org/support Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: ---------------------------------------------------- Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: proto: precision = 0.081 usec (-23) Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: basedate set to 2025-01-01 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: gps base set to 2025-01-05 (week 2348) Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen normally on 3 eth0 172.31.23.216:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen normally on 4 lo [::1]:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listen normally on 5 eth0 [fe80::4d7:40ff:fe43:3935%2]:123 Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: Listening on routing socket on fd #22 for interface updates Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:31:32.502700 ntpd[2042]: 13 Jan 21:31:32 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:31:32.467492 ntpd[2042]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:31:32.478212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:31:32.467522 ntpd[2042]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:31:32.478596 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:31:32.467534 ntpd[2042]: ---------------------------------------------------- Jan 13 21:31:32.467544 ntpd[2042]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:31:32.550145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:31:32.467554 ntpd[2042]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:31:32.550193 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:31:32.467564 ntpd[2042]: corporation. Support and training for ntp-4 are Jan 13 21:31:32.554124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
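The TIME_ERROR "Clock Unsynchronized" lines are expected this early: ntpd has only just opened its listen sockets and has not selected a peer yet. Assuming the ntpq query tool is shipped alongside ntpd on this image, peer state can be checked once the daemon has run for a few minutes:

    # a '*' in the first column marks the peer the clock is currently synced to
    ntpq -np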
Jan 13 21:31:32.559021 jq[2085]: true Jan 13 21:31:32.467574 ntpd[2042]: available at https://www.nwtime.org/support Jan 13 21:31:32.554148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:31:32.467585 ntpd[2042]: ---------------------------------------------------- Jan 13 21:31:32.470535 ntpd[2042]: proto: precision = 0.081 usec (-23) Jan 13 21:31:32.473824 ntpd[2042]: basedate set to 2025-01-01 Jan 13 21:31:32.473847 ntpd[2042]: gps base set to 2025-01-05 (week 2348) Jan 13 21:31:32.487574 ntpd[2042]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:31:32.487632 ntpd[2042]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:31:32.487822 ntpd[2042]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:31:32.487865 ntpd[2042]: Listen normally on 3 eth0 172.31.23.216:123 Jan 13 21:31:32.487924 ntpd[2042]: Listen normally on 4 lo [::1]:123 Jan 13 21:31:32.487965 ntpd[2042]: Listen normally on 5 eth0 [fe80::4d7:40ff:fe43:3935%2]:123 Jan 13 21:31:32.488000 ntpd[2042]: Listening on routing socket on fd #22 for interface updates Jan 13 21:31:32.494841 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:31:32.494871 ntpd[2042]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:31:32.571270 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:31:32.597170 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:31:32.598015 (ntainerd)[2100]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:31:32.660778 extend-filesystems[2076]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:31:32.660778 extend-filesystems[2076]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:31:32.660778 extend-filesystems[2076]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:31:32.656291 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:31:32.682174 extend-filesystems[2036]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:31:32.656721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:31:32.687471 tar[2077]: linux-amd64/helm Jan 13 21:31:32.669689 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:31:32.688386 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:31:32.694550 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:31:32.708150 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:31:32.710075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:31:32.723122 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:31:32.733956 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:31:32.823572 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
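The extend-filesystems unit above grew the mounted root ext4 filesystem in place (553472 -> 1489915 blocks). A minimal sketch of the equivalent manual online resize, assuming the underlying partition was already enlarged earlier in boot:

    # ext4 supports online growth, so this works while / is mounted
    sudo resize2fs /dev/nvme0n1p9
    df -h /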
Jan 13 21:31:32.885510 systemd-logind[2056]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:31:32.885541 systemd-logind[2056]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 21:31:32.885564 systemd-logind[2056]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:31:32.888602 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2140) Jan 13 21:31:32.893119 systemd-logind[2056]: New seat seat0. Jan 13 21:31:32.896847 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:31:32.909510 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:31:32.918162 bash[2154]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:31:32.926162 systemd[1]: Starting sshkeys.service... Jan 13 21:31:32.960586 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:31:32.975857 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: Initializing new seelog logger Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: New Seelog Logger Creation Complete Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 processing appconfig overrides Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 processing appconfig overrides Jan 13 21:31:33.038011 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO Proxy environment variables: Jan 13 21:31:33.045620 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.045620 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.045620 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 processing appconfig overrides Jan 13 21:31:33.076922 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.076922 amazon-ssm-agent[2135]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:31:33.076922 amazon-ssm-agent[2135]: 2025/01/13 21:31:33 processing appconfig overrides Jan 13 21:31:33.162038 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO https_proxy: Jan 13 21:31:33.263024 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO http_proxy: Jan 13 21:31:33.369472 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO no_proxy: Jan 13 21:31:33.388268 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:31:33.388555 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
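amazon-ssm-agent has loaded its config overrides and is about to choose an identity; if it misbehaves later, its state is easiest to read back out of the journal. A small sketch using standard systemd tooling (nothing Flatcar-specific assumed):

    systemctl status amazon-ssm-agent --no-pager
    journalctl -u amazon-ssm-agent -b --no-pager | tail -n 20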
Jan 13 21:31:33.406069 coreos-metadata[2178]: Jan 13 21:31:33.406 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:31:33.407790 coreos-metadata[2178]: Jan 13 21:31:33.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:31:33.409452 coreos-metadata[2178]: Jan 13 21:31:33.408 INFO Fetch successful Jan 13 21:31:33.409452 coreos-metadata[2178]: Jan 13 21:31:33.408 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:31:33.409452 coreos-metadata[2178]: Jan 13 21:31:33.409 INFO Fetch successful Jan 13 21:31:33.417291 dbus-daemon[2034]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2125 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:31:33.419102 unknown[2178]: wrote ssh authorized keys file for user: core Jan 13 21:31:33.430788 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:31:33.478281 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:31:33.492887 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:31:33.493226 update-ssh-keys[2262]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:31:33.504259 systemd[1]: Finished sshkeys.service. Jan 13 21:31:33.512142 locksmithd[2128]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:31:33.536044 polkitd[2260]: Started polkitd version 121 Jan 13 21:31:33.566213 containerd[2100]: time="2025-01-13T21:31:33.564400149Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:31:33.571598 polkitd[2260]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:31:33.571759 polkitd[2260]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:31:33.579117 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:31:33.581535 polkitd[2260]: Finished loading, compiling and executing 2 rules Jan 13 21:31:33.584348 dbus-daemon[2034]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:31:33.587208 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:31:33.588413 polkitd[2260]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:31:33.654347 containerd[2100]: time="2025-01-13T21:31:33.654288174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.665743 systemd-hostnamed[2125]: Hostname set to (transient) Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.664868525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.666816326Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.666852996Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.667366956Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.667399186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.667468988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.667487882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.668456163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.668482202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.668503062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:31:33.669587 containerd[2100]: time="2025-01-13T21:31:33.668517217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.667754 systemd-resolved[1974]: System hostname changed to 'ip-172-31-23-216'. Jan 13 21:31:33.670359 containerd[2100]: time="2025-01-13T21:31:33.668626230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.672476 containerd[2100]: time="2025-01-13T21:31:33.671671767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:31:33.673802 containerd[2100]: time="2025-01-13T21:31:33.673040326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:31:33.673802 containerd[2100]: time="2025-01-13T21:31:33.673177615Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:31:33.673802 containerd[2100]: time="2025-01-13T21:31:33.673559746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:31:33.673802 containerd[2100]: time="2025-01-13T21:31:33.673660848Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:31:33.683193 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO Agent will take identity from EC2 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.688678037Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.688756497Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.688787147Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.688809883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.688832112Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.689191588Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.689632275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690285612Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690313939Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690332989Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690357695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690378363Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690398019Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.690915 containerd[2100]: time="2025-01-13T21:31:33.690419250Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690445860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690467662Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690486758Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690505364Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690532333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690552656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690570006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690588485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690604977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690630233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690647317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690672409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690696668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.691501 containerd[2100]: time="2025-01-13T21:31:33.690716901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690733759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690749357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690767103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690788818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690819345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690837532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.692106 containerd[2100]: time="2025-01-13T21:31:33.690855231Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:31:33.693279 containerd[2100]: time="2025-01-13T21:31:33.693251946Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693451045Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693473675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693493263Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693507936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693535260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693550597Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:31:33.695235 containerd[2100]: time="2025-01-13T21:31:33.693566060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:31:33.695543 containerd[2100]: time="2025-01-13T21:31:33.694185297Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:31:33.695543 containerd[2100]: time="2025-01-13T21:31:33.694276191Z" level=info msg="Connect containerd service" Jan 13 21:31:33.695543 containerd[2100]: time="2025-01-13T21:31:33.694331072Z" level=info msg="using legacy CRI server" Jan 13 21:31:33.695543 containerd[2100]: time="2025-01-13T21:31:33.694341087Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:31:33.695543 containerd[2100]: 
time="2025-01-13T21:31:33.694485069Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701004226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701221005Z" level=info msg="Start subscribing containerd event" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701276945Z" level=info msg="Start recovering state" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701358810Z" level=info msg="Start event monitor" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701383780Z" level=info msg="Start snapshots syncer" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701398254Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.701409605Z" level=info msg="Start streaming server" Jan 13 21:31:33.702217 containerd[2100]: time="2025-01-13T21:31:33.702063627Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:31:33.703916 containerd[2100]: time="2025-01-13T21:31:33.702134626Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:31:33.709183 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:31:33.709905 containerd[2100]: time="2025-01-13T21:31:33.709860381Z" level=info msg="containerd successfully booted in 0.146779s" Jan 13 21:31:33.781575 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:31:33.879006 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:31:33.969020 sshd_keygen[2082]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:31:33.978263 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:31:34.029487 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:31:34.045434 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:31:34.075463 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:31:34.075760 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:31:34.079447 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:31:34.088279 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:31:34.094887 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 21:31:34.094887 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:31:34.094887 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:31:34.094887 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [Registrar] Starting registrar module Jan 13 21:31:34.095138 amazon-ssm-agent[2135]: 2025-01-13 21:31:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:31:34.095138 amazon-ssm-agent[2135]: 2025-01-13 21:31:34 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 21:31:34.095138 amazon-ssm-agent[2135]: 2025-01-13 21:31:34 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:31:34.095138 amazon-ssm-agent[2135]: 2025-01-13 21:31:34 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:31:34.095138 amazon-ssm-agent[2135]: 2025-01-13 21:31:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:31:34.117256 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:31:34.128479 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:31:34.142373 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:31:34.143944 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:31:34.179177 amazon-ssm-agent[2135]: 2025-01-13 21:31:34 INFO [CredentialRefresher] Next credential rotation will be in 30.2166609966 minutes Jan 13 21:31:34.554057 tar[2077]: linux-amd64/LICENSE Jan 13 21:31:34.554057 tar[2077]: linux-amd64/README.md Jan 13 21:31:34.575551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:31:35.095217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:31:35.097635 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:31:35.100575 systemd[1]: Startup finished in 8.516s (kernel) + 8.655s (userspace) = 17.171s. Jan 13 21:31:35.136278 amazon-ssm-agent[2135]: 2025-01-13 21:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:31:35.221601 amazon-ssm-agent[2135]: 2025-01-13 21:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2328) started Jan 13 21:31:35.271225 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:31:35.322805 amazon-ssm-agent[2135]: 2025-01-13 21:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:31:36.550140 kubelet[2326]: E0113 21:31:36.550009 2326 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:31:36.554355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:31:36.554740 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:31:40.149507 systemd-resolved[1974]: Clock change detected. Flushing caches. Jan 13 21:31:40.937422 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:31:40.952218 systemd[1]: Started sshd@0-172.31.23.216:22-147.75.109.163:40160.service - OpenSSH per-connection server daemon (147.75.109.163:40160). Jan 13 21:31:41.165864 sshd[2350]: Accepted publickey for core from 147.75.109.163 port 40160 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:41.169070 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:41.184168 systemd-logind[2056]: New session 1 of user core. Jan 13 21:31:41.184790 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
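The kubelet exit above ("/var/lib/kubelet/config.yaml: no such file or directory") is likewise expected on a node that has not joined a cluster yet: in kubeadm-based provisioning that file is only written by 'kubeadm init' or 'kubeadm join', so systemd keeps restarting the unit until then. A quick check that this is the only problem:

    systemctl status kubelet --no-pager
    ls -l /var/lib/kubelet/config.yaml   # absent until kubeadm init/join runs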
Jan 13 21:31:41.191102 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:31:41.222662 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:31:41.229990 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:31:41.257254 (systemd)[2355]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:31:41.447707 systemd[2355]: Queued start job for default target default.target. Jan 13 21:31:41.448328 systemd[2355]: Created slice app.slice - User Application Slice. Jan 13 21:31:41.448363 systemd[2355]: Reached target paths.target - Paths. Jan 13 21:31:41.448383 systemd[2355]: Reached target timers.target - Timers. Jan 13 21:31:41.454600 systemd[2355]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:31:41.477688 systemd[2355]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:31:41.477776 systemd[2355]: Reached target sockets.target - Sockets. Jan 13 21:31:41.477795 systemd[2355]: Reached target basic.target - Basic System. Jan 13 21:31:41.477874 systemd[2355]: Reached target default.target - Main User Target. Jan 13 21:31:41.477912 systemd[2355]: Startup finished in 189ms. Jan 13 21:31:41.478537 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:31:41.493443 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:31:41.646787 systemd[1]: Started sshd@1-172.31.23.216:22-147.75.109.163:40162.service - OpenSSH per-connection server daemon (147.75.109.163:40162). Jan 13 21:31:41.818388 sshd[2368]: Accepted publickey for core from 147.75.109.163 port 40162 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:41.820390 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:41.828182 systemd-logind[2056]: New session 2 of user core. Jan 13 21:31:41.831241 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:31:41.963443 sshd[2368]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:41.968658 systemd[1]: sshd@1-172.31.23.216:22-147.75.109.163:40162.service: Deactivated successfully. Jan 13 21:31:41.976060 systemd-logind[2056]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:31:41.977054 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:31:41.979433 systemd-logind[2056]: Removed session 2. Jan 13 21:31:41.991184 systemd[1]: Started sshd@2-172.31.23.216:22-147.75.109.163:40168.service - OpenSSH per-connection server daemon (147.75.109.163:40168). Jan 13 21:31:42.158636 sshd[2376]: Accepted publickey for core from 147.75.109.163 port 40168 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:42.161375 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:42.173071 systemd-logind[2056]: New session 3 of user core. Jan 13 21:31:42.181646 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:31:42.309501 sshd[2376]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:42.315568 systemd[1]: sshd@2-172.31.23.216:22-147.75.109.163:40168.service: Deactivated successfully. Jan 13 21:31:42.321699 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:31:42.322732 systemd-logind[2056]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:31:42.324157 systemd-logind[2056]: Removed session 3. 
Jan 13 21:31:42.340296 systemd[1]: Started sshd@3-172.31.23.216:22-147.75.109.163:40172.service - OpenSSH per-connection server daemon (147.75.109.163:40172). Jan 13 21:31:42.505944 sshd[2384]: Accepted publickey for core from 147.75.109.163 port 40172 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:42.508734 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:42.519440 systemd-logind[2056]: New session 4 of user core. Jan 13 21:31:42.526467 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:31:42.674286 sshd[2384]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:42.680578 systemd[1]: sshd@3-172.31.23.216:22-147.75.109.163:40172.service: Deactivated successfully. Jan 13 21:31:42.691117 systemd-logind[2056]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:31:42.692939 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:31:42.713470 systemd[1]: Started sshd@4-172.31.23.216:22-147.75.109.163:40176.service - OpenSSH per-connection server daemon (147.75.109.163:40176). Jan 13 21:31:42.715432 systemd-logind[2056]: Removed session 4. Jan 13 21:31:42.898493 sshd[2392]: Accepted publickey for core from 147.75.109.163 port 40176 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:42.900343 sshd[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:42.910876 systemd-logind[2056]: New session 5 of user core. Jan 13 21:31:42.921361 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:31:43.053256 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:31:43.054005 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:31:43.067728 sudo[2396]: pam_unix(sudo:session): session closed for user root Jan 13 21:31:43.090785 sshd[2392]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:43.100361 systemd[1]: sshd@4-172.31.23.216:22-147.75.109.163:40176.service: Deactivated successfully. Jan 13 21:31:43.107011 systemd-logind[2056]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:31:43.107881 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:31:43.109460 systemd-logind[2056]: Removed session 5. Jan 13 21:31:43.117168 systemd[1]: Started sshd@5-172.31.23.216:22-147.75.109.163:40192.service - OpenSSH per-connection server daemon (147.75.109.163:40192). Jan 13 21:31:43.277649 sshd[2401]: Accepted publickey for core from 147.75.109.163 port 40192 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:43.280040 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:43.287031 systemd-logind[2056]: New session 6 of user core. Jan 13 21:31:43.299696 systemd[1]: Started session-6.scope - Session 6 of User core. 
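The first sudo command of the session above switches SELinux to enforcing mode. Assuming the matching read-side tool is present on the image (it normally ships next to setenforce), the change can be verified with:

    getenforce    # should now print "Enforcing"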
Jan 13 21:31:43.404026 sudo[2406]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:31:43.404427 sudo[2406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:31:43.410253 sudo[2406]: pam_unix(sudo:session): session closed for user root Jan 13 21:31:43.419410 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:31:43.420363 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:31:43.443389 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:31:43.457808 auditctl[2409]: No rules Jan 13 21:31:43.458594 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:31:43.458900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:31:43.469191 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:31:43.544121 augenrules[2428]: No rules Jan 13 21:31:43.548250 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:31:43.550762 sudo[2405]: pam_unix(sudo:session): session closed for user root Jan 13 21:31:43.574392 sshd[2401]: pam_unix(sshd:session): session closed for user core Jan 13 21:31:43.579425 systemd[1]: sshd@5-172.31.23.216:22-147.75.109.163:40192.service: Deactivated successfully. Jan 13 21:31:43.583863 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:31:43.584893 systemd-logind[2056]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:31:43.586716 systemd-logind[2056]: Removed session 6. Jan 13 21:31:43.601555 systemd[1]: Started sshd@6-172.31.23.216:22-147.75.109.163:40208.service - OpenSSH per-connection server daemon (147.75.109.163:40208). Jan 13 21:31:43.755961 sshd[2437]: Accepted publickey for core from 147.75.109.163 port 40208 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:31:43.758134 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:31:43.767926 systemd-logind[2056]: New session 7 of user core. Jan 13 21:31:43.774498 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:31:43.876251 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:31:43.876718 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:31:44.435202 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:31:44.440128 (dockerd)[2456]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:31:45.040222 dockerd[2456]: time="2025-01-13T21:31:45.040158946Z" level=info msg="Starting up" Jan 13 21:31:45.534302 dockerd[2456]: time="2025-01-13T21:31:45.533803577Z" level=info msg="Loading containers: start." Jan 13 21:31:45.698890 kernel: Initializing XFRM netlink socket Jan 13 21:31:45.739914 (udev-worker)[2477]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:31:45.844449 systemd-networkd[1652]: docker0: Link UP Jan 13 21:31:45.869445 dockerd[2456]: time="2025-01-13T21:31:45.869388418Z" level=info msg="Loading containers: done." 
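Docker has created its docker0 bridge and finished loading containers. Two quick checks that the daemon and the bridge came up as logged (the --format fields are standard docker info template keys):

    sudo docker info --format '{{.ServerVersion}} {{.Driver}}'
    ip -br link show docker0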
Jan 13 21:31:45.895267 dockerd[2456]: time="2025-01-13T21:31:45.895210104Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:31:45.895473 dockerd[2456]: time="2025-01-13T21:31:45.895348687Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:31:45.895523 dockerd[2456]: time="2025-01-13T21:31:45.895489033Z" level=info msg="Daemon has completed initialization" Jan 13 21:31:45.963163 dockerd[2456]: time="2025-01-13T21:31:45.962936533Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:31:45.963617 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:31:47.304907 containerd[2100]: time="2025-01-13T21:31:47.304560644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:31:47.484036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:31:47.490219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:31:47.741986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:31:47.775452 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:31:47.859968 kubelet[2614]: E0113 21:31:47.859872 2614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:31:47.866193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:31:47.866549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:31:48.002913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056650344.mount: Deactivated successfully. 
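The PullImage lines are containerd's CRI plugin fetching control-plane images on the kubelet's behalf. The same pull can be reproduced by hand against the k8s.io namespace; a sketch with ctr, normally unnecessary since the kubelet drives this through CRI:

    sudo ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.29.12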
Jan 13 21:31:50.648730 containerd[2100]: time="2025-01-13T21:31:50.648680502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:50.650355 containerd[2100]: time="2025-01-13T21:31:50.650230551Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 21:31:50.653010 containerd[2100]: time="2025-01-13T21:31:50.652568610Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:50.656910 containerd[2100]: time="2025-01-13T21:31:50.656863090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:50.658318 containerd[2100]: time="2025-01-13T21:31:50.658272507Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.35367342s" Jan 13 21:31:50.658482 containerd[2100]: time="2025-01-13T21:31:50.658457685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 21:31:50.685319 containerd[2100]: time="2025-01-13T21:31:50.685283826Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:31:53.198570 containerd[2100]: time="2025-01-13T21:31:53.198466506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:53.201491 containerd[2100]: time="2025-01-13T21:31:53.201256714Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 21:31:53.203791 containerd[2100]: time="2025-01-13T21:31:53.202921728Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:53.208530 containerd[2100]: time="2025-01-13T21:31:53.208482556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:53.209899 containerd[2100]: time="2025-01-13T21:31:53.209854851Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.524526705s" Jan 13 21:31:53.210005 containerd[2100]: time="2025-01-13T21:31:53.209906469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
21:31:53.244313 containerd[2100]: time="2025-01-13T21:31:53.244281930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:31:54.817224 containerd[2100]: time="2025-01-13T21:31:54.817169608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:54.818821 containerd[2100]: time="2025-01-13T21:31:54.818756862Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 21:31:54.820490 containerd[2100]: time="2025-01-13T21:31:54.820342405Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:54.824759 containerd[2100]: time="2025-01-13T21:31:54.824632330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:54.826049 containerd[2100]: time="2025-01-13T21:31:54.826011497Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.5815091s" Jan 13 21:31:54.826315 containerd[2100]: time="2025-01-13T21:31:54.826206028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 21:31:54.856908 containerd[2100]: time="2025-01-13T21:31:54.856697018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:31:56.197704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276756626.mount: Deactivated successfully. 
Jan 13 21:31:56.817903 containerd[2100]: time="2025-01-13T21:31:56.817846941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:56.819415 containerd[2100]: time="2025-01-13T21:31:56.819240319Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 21:31:56.820858 containerd[2100]: time="2025-01-13T21:31:56.820646586Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:56.822901 containerd[2100]: time="2025-01-13T21:31:56.822840801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:56.824856 containerd[2100]: time="2025-01-13T21:31:56.823638796Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.966902222s" Jan 13 21:31:56.824856 containerd[2100]: time="2025-01-13T21:31:56.823683966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 21:31:56.851729 containerd[2100]: time="2025-01-13T21:31:56.851692411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:31:57.518054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862587576.mount: Deactivated successfully. Jan 13 21:31:58.116626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:31:58.123085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:31:58.549715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:31:58.564407 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:31:58.687512 kubelet[2768]: E0113 21:31:58.687435 2768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:31:58.690482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:31:58.690743 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:31:58.964182 containerd[2100]: time="2025-01-13T21:31:58.962241240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:58.970692 containerd[2100]: time="2025-01-13T21:31:58.970603658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:31:58.972542 containerd[2100]: time="2025-01-13T21:31:58.972497414Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:58.984456 containerd[2100]: time="2025-01-13T21:31:58.984034060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:58.986630 containerd[2100]: time="2025-01-13T21:31:58.986580996Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.134845176s" Jan 13 21:31:58.986775 containerd[2100]: time="2025-01-13T21:31:58.986639434Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:31:59.023415 containerd[2100]: time="2025-01-13T21:31:59.023367601Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:31:59.528124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495081352.mount: Deactivated successfully. 
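After the coredns and pause pulls above, the images accumulated so far can be listed straight from containerd's k8s.io namespace:

    sudo ctr --namespace k8s.io images ls -q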
Jan 13 21:31:59.537049 containerd[2100]: time="2025-01-13T21:31:59.536998267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:59.538418 containerd[2100]: time="2025-01-13T21:31:59.538219647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:31:59.541281 containerd[2100]: time="2025-01-13T21:31:59.539939556Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:59.542538 containerd[2100]: time="2025-01-13T21:31:59.542505407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:31:59.543340 containerd[2100]: time="2025-01-13T21:31:59.543305302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 519.893379ms" Jan 13 21:31:59.543437 containerd[2100]: time="2025-01-13T21:31:59.543348357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:31:59.571227 containerd[2100]: time="2025-01-13T21:31:59.571192771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:32:00.179538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971361741.mount: Deactivated successfully. Jan 13 21:32:04.115784 containerd[2100]: time="2025-01-13T21:32:04.115724237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:04.117590 containerd[2100]: time="2025-01-13T21:32:04.117355317Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 21:32:04.119668 containerd[2100]: time="2025-01-13T21:32:04.119312912Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:04.123225 containerd[2100]: time="2025-01-13T21:32:04.123191147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:04.124794 containerd[2100]: time="2025-01-13T21:32:04.124686113Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.553459735s" Jan 13 21:32:04.124946 containerd[2100]: time="2025-01-13T21:32:04.124796925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 21:32:04.376787 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:32:08.031793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:08.042561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:08.091470 systemd[1]: Reloading requested from client PID 2905 ('systemctl') (unit session-7.scope)... Jan 13 21:32:08.091485 systemd[1]: Reloading... Jan 13 21:32:08.233863 zram_generator::config[2943]: No configuration found. Jan 13 21:32:08.481027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:32:08.594583 systemd[1]: Reloading finished in 502 ms. Jan 13 21:32:08.661070 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:32:08.661396 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:32:08.661805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:08.667215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:08.948076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:08.959452 (kubelet)[3014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:09.038149 kubelet[3014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:09.038149 kubelet[3014]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:09.038149 kubelet[3014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:09.042753 kubelet[3014]: I0113 21:32:09.042669 3014 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:10.095671 kubelet[3014]: I0113 21:32:10.095634 3014 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:10.095671 kubelet[3014]: I0113 21:32:10.095679 3014 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:10.096970 kubelet[3014]: I0113 21:32:10.096586 3014 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:10.140285 kubelet[3014]: I0113 21:32:10.140240 3014 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:10.142903 kubelet[3014]: E0113 21:32:10.142874 3014 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.165559 kubelet[3014]: I0113 21:32:10.165520 3014 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:32:10.166087 kubelet[3014]: I0113 21:32:10.166064 3014 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:10.167630 kubelet[3014]: I0113 21:32:10.167600 3014 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:10.168503 kubelet[3014]: I0113 21:32:10.168474 3014 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:10.168503 kubelet[3014]: I0113 21:32:10.168506 3014 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:10.168671 kubelet[3014]: I0113 21:32:10.168653 3014 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:10.168807 kubelet[3014]: I0113 21:32:10.168789 3014 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:10.169245 kubelet[3014]: I0113 21:32:10.168812 3014 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:10.169346 kubelet[3014]: W0113 21:32:10.169301 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-216&limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.169395 kubelet[3014]: E0113 21:32:10.169359 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-216&limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.171912 kubelet[3014]: I0113 21:32:10.171607 3014 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:32:10.171912 kubelet[3014]: I0113 21:32:10.171656 3014 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:10.174737 kubelet[3014]: W0113 21:32:10.174691 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.216:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.174823 kubelet[3014]: E0113 21:32:10.174749 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.216:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.178015 kubelet[3014]: I0113 21:32:10.177130 3014 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:10.183900 kubelet[3014]: I0113 21:32:10.183861 3014 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:10.186251 kubelet[3014]: W0113 21:32:10.186203 3014 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:32:10.186972 kubelet[3014]: I0113 21:32:10.186929 3014 server.go:1256] "Started kubelet" Jan 13 21:32:10.189856 kubelet[3014]: I0113 21:32:10.187215 3014 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:10.189856 kubelet[3014]: I0113 21:32:10.187278 3014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:10.189856 kubelet[3014]: I0113 21:32:10.187751 3014 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:10.189856 kubelet[3014]: I0113 21:32:10.188264 3014 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:32:10.193808 kubelet[3014]: I0113 21:32:10.193765 3014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:10.208741 kubelet[3014]: I0113 21:32:10.208211 3014 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:10.211812 kubelet[3014]: E0113 21:32:10.211779 3014 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.216:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-216.181a5df8f74ae33c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-216,UID:ip-172-31-23-216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-216,},FirstTimestamp:2025-01-13 21:32:10.186900284 +0000 UTC m=+1.221740042,LastTimestamp:2025-01-13 21:32:10.186900284 +0000 UTC m=+1.221740042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-216,}" Jan 13 21:32:10.214451 kubelet[3014]: E0113 21:32:10.214420 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": dial tcp 172.31.23.216:6443: connect: connection refused" interval="200ms" Jan 13 21:32:10.214894 kubelet[3014]: I0113 21:32:10.214867 3014 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:10.215788 kubelet[3014]: I0113 21:32:10.215197 3014 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:10.215788 kubelet[3014]: I0113 21:32:10.215325 3014 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:10.215788 kubelet[3014]: I0113 21:32:10.215540 3014 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:10.234310 kubelet[3014]: W0113 21:32:10.234218 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.234310 kubelet[3014]: E0113 21:32:10.234355 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.239886 kubelet[3014]: E0113 21:32:10.239605 3014 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:10.245389 kubelet[3014]: I0113 21:32:10.245342 3014 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:10.252887 kubelet[3014]: I0113 21:32:10.252696 3014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:10.254620 kubelet[3014]: I0113 21:32:10.254588 3014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:32:10.254893 kubelet[3014]: I0113 21:32:10.254775 3014 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:10.254893 kubelet[3014]: I0113 21:32:10.254810 3014 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:10.255255 kubelet[3014]: E0113 21:32:10.255072 3014 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:10.267772 kubelet[3014]: W0113 21:32:10.267570 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.267772 kubelet[3014]: E0113 21:32:10.267631 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:10.289253 kubelet[3014]: I0113 21:32:10.289222 3014 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:10.289579 kubelet[3014]: I0113 21:32:10.289386 3014 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:10.289579 kubelet[3014]: I0113 21:32:10.289405 3014 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:10.292559 kubelet[3014]: I0113 21:32:10.292408 3014 policy_none.go:49] "None policy: Start" Jan 13 21:32:10.293629 kubelet[3014]: I0113 21:32:10.293313 3014 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:10.293629 kubelet[3014]: I0113 21:32:10.293340 3014 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:10.302472 kubelet[3014]: I0113 21:32:10.302432 3014 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:10.302739 kubelet[3014]: I0113 21:32:10.302716 3014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:10.310021 kubelet[3014]: I0113 21:32:10.310000 3014 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:10.310985 kubelet[3014]: E0113 21:32:10.310961 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.216:6443/api/v1/nodes\": dial tcp 172.31.23.216:6443: connect: connection refused" node="ip-172-31-23-216" Jan 13 21:32:10.311256 kubelet[3014]: E0113 21:32:10.311236 3014 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-216\" not found" Jan 13 21:32:10.355370 kubelet[3014]: I0113 21:32:10.355250 3014 topology_manager.go:215] "Topology Admit Handler" podUID="5264b917ac128845d4e4545dc716c5f4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-216" Jan 13 21:32:10.358626 kubelet[3014]: I0113 21:32:10.358598 3014 topology_manager.go:215] "Topology Admit Handler" podUID="108d238c88cdb03b89beef917e506462" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.360787 kubelet[3014]: I0113 21:32:10.360524 3014 topology_manager.go:215] "Topology Admit Handler" podUID="625ced77fea44bd56a3e41c331d68647" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-216" Jan 13 21:32:10.416029 kubelet[3014]: E0113 21:32:10.415986 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": dial tcp 172.31.23.216:6443: connect: connection refused" interval="400ms" Jan 13 21:32:10.417525 kubelet[3014]: I0113 21:32:10.417492 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/625ced77fea44bd56a3e41c331d68647-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-216\" (UID: \"625ced77fea44bd56a3e41c331d68647\") " pod="kube-system/kube-scheduler-ip-172-31-23-216" Jan 13 21:32:10.417643 kubelet[3014]: I0113 21:32:10.417544 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:10.417643 kubelet[3014]: I0113 21:32:10.417579 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:10.417643 kubelet[3014]: I0113 21:32:10.417610 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.417643 kubelet[3014]: I0113 21:32:10.417637 3014 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-ca-certs\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:10.417809 kubelet[3014]: I0113 21:32:10.417665 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.417809 kubelet[3014]: I0113 21:32:10.417696 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.417809 kubelet[3014]: I0113 21:32:10.417729 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.417809 kubelet[3014]: I0113 21:32:10.417765 3014 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:10.513362 kubelet[3014]: I0113 21:32:10.513096 3014 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:10.513545 kubelet[3014]: E0113 21:32:10.513429 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.216:6443/api/v1/nodes\": dial tcp 172.31.23.216:6443: connect: connection refused" node="ip-172-31-23-216" Jan 13 21:32:10.664908 containerd[2100]: time="2025-01-13T21:32:10.664547792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-216,Uid:5264b917ac128845d4e4545dc716c5f4,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:10.668299 containerd[2100]: time="2025-01-13T21:32:10.668255868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-216,Uid:108d238c88cdb03b89beef917e506462,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:10.670796 containerd[2100]: time="2025-01-13T21:32:10.670535631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-216,Uid:625ced77fea44bd56a3e41c331d68647,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:10.817215 kubelet[3014]: E0113 21:32:10.817181 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": dial tcp 172.31.23.216:6443: connect: connection refused" interval="800ms" Jan 13 21:32:10.915101 kubelet[3014]: I0113 
21:32:10.914994 3014 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:10.915341 kubelet[3014]: E0113 21:32:10.915331 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.216:6443/api/v1/nodes\": dial tcp 172.31.23.216:6443: connect: connection refused" node="ip-172-31-23-216" Jan 13 21:32:11.016682 kubelet[3014]: W0113 21:32:11.016614 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-216&limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.016682 kubelet[3014]: E0113 21:32:11.016682 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-216&limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.187711 kubelet[3014]: W0113 21:32:11.187413 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.187711 kubelet[3014]: E0113 21:32:11.187460 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.207459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826166695.mount: Deactivated successfully. 
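Throughout this window the kubelet cannot reach the API server at 172.31.23.216:6443, and the lease controller's retry interval doubles after each failure (200ms, then 400ms, then 800ms in the entries above). Purely as an illustrative sketch, not part of the log, the same probe-and-double pattern against that endpoint looks like this in Go; the dial timeout and the cap are assumptions, not values taken from the kubelet:

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	addr := "172.31.23.216:6443"
	interval := 200 * time.Millisecond // first retry interval reported in the log

	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			log.Printf("%s is reachable", addr)
			return
		}
		log.Printf("dial %s failed (%v), retrying in %s", addr, err, interval)
		time.Sleep(interval)
		if interval < 5*time.Second { // assumed cap, purely illustrative
			interval *= 2 // doubling matches the 200ms -> 400ms -> 800ms progression above
		}
	}
}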
Jan 13 21:32:11.217967 containerd[2100]: time="2025-01-13T21:32:11.217918146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:11.219113 containerd[2100]: time="2025-01-13T21:32:11.219054036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:32:11.220676 containerd[2100]: time="2025-01-13T21:32:11.220637865Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:11.221804 containerd[2100]: time="2025-01-13T21:32:11.221770297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:11.222740 containerd[2100]: time="2025-01-13T21:32:11.222685038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:11.224283 containerd[2100]: time="2025-01-13T21:32:11.224248737Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:11.225579 containerd[2100]: time="2025-01-13T21:32:11.225470670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:32:11.228873 containerd[2100]: time="2025-01-13T21:32:11.228079231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:32:11.230319 containerd[2100]: time="2025-01-13T21:32:11.230248725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.611499ms" Jan 13 21:32:11.233768 containerd[2100]: time="2025-01-13T21:32:11.232926652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.311854ms" Jan 13 21:32:11.238145 containerd[2100]: time="2025-01-13T21:32:11.238100223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.66227ms" Jan 13 21:32:11.327799 kubelet[3014]: W0113 21:32:11.327767 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.328027 
kubelet[3014]: E0113 21:32:11.328006 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.448286 containerd[2100]: time="2025-01-13T21:32:11.447376330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:11.448716 containerd[2100]: time="2025-01-13T21:32:11.448530459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:11.449394 containerd[2100]: time="2025-01-13T21:32:11.449015397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:11.449394 containerd[2100]: time="2025-01-13T21:32:11.449073563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:11.449394 containerd[2100]: time="2025-01-13T21:32:11.449104202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.449394 containerd[2100]: time="2025-01-13T21:32:11.449241673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.449744 containerd[2100]: time="2025-01-13T21:32:11.449374298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.450025 containerd[2100]: time="2025-01-13T21:32:11.449980485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.460649 containerd[2100]: time="2025-01-13T21:32:11.460216825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:11.460649 containerd[2100]: time="2025-01-13T21:32:11.460297909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:11.460649 containerd[2100]: time="2025-01-13T21:32:11.460319692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.460649 containerd[2100]: time="2025-01-13T21:32:11.460532340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:11.582399 containerd[2100]: time="2025-01-13T21:32:11.582284848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-216,Uid:5264b917ac128845d4e4545dc716c5f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b789daba76b7a4f9445c2452869456531ef3fcbf53e1c7a9983010c3dd6c317\"" Jan 13 21:32:11.597365 containerd[2100]: time="2025-01-13T21:32:11.596314341Z" level=info msg="CreateContainer within sandbox \"3b789daba76b7a4f9445c2452869456531ef3fcbf53e1c7a9983010c3dd6c317\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:32:11.619722 kubelet[3014]: E0113 21:32:11.618908 3014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": dial tcp 172.31.23.216:6443: connect: connection refused" interval="1.6s" Jan 13 21:32:11.622763 containerd[2100]: time="2025-01-13T21:32:11.622726844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-216,Uid:625ced77fea44bd56a3e41c331d68647,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc5b78ead2c90a9887594baa40916d413282fc6b0447bd6501c6d5e40a5e035\"" Jan 13 21:32:11.629180 kubelet[3014]: W0113 21:32:11.629122 3014 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.216:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.629180 kubelet[3014]: E0113 21:32:11.629185 3014 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.216:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:11.630722 containerd[2100]: time="2025-01-13T21:32:11.630689304Z" level=info msg="CreateContainer within sandbox \"bdc5b78ead2c90a9887594baa40916d413282fc6b0447bd6501c6d5e40a5e035\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:32:11.631021 containerd[2100]: time="2025-01-13T21:32:11.630852852Z" level=info msg="CreateContainer within sandbox \"3b789daba76b7a4f9445c2452869456531ef3fcbf53e1c7a9983010c3dd6c317\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"02611f865147198db373cb6bd5309d94da892b551694dca7aa92cd13457aa938\"" Jan 13 21:32:11.631641 containerd[2100]: time="2025-01-13T21:32:11.631613532Z" level=info msg="StartContainer for \"02611f865147198db373cb6bd5309d94da892b551694dca7aa92cd13457aa938\"" Jan 13 21:32:11.652121 containerd[2100]: time="2025-01-13T21:32:11.651897088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-216,Uid:108d238c88cdb03b89beef917e506462,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf301a189f0b4830896083a67cfe6d71185fc017cd8dfd9326c06f3d8329675\"" Jan 13 21:32:11.664288 containerd[2100]: time="2025-01-13T21:32:11.664248530Z" level=info msg="CreateContainer within sandbox \"bdc5b78ead2c90a9887594baa40916d413282fc6b0447bd6501c6d5e40a5e035\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba\"" Jan 13 21:32:11.665101 containerd[2100]: time="2025-01-13T21:32:11.664908226Z" level=info msg="CreateContainer within sandbox 
\"7cf301a189f0b4830896083a67cfe6d71185fc017cd8dfd9326c06f3d8329675\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:32:11.668971 containerd[2100]: time="2025-01-13T21:32:11.665439439Z" level=info msg="StartContainer for \"a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba\"" Jan 13 21:32:11.701884 containerd[2100]: time="2025-01-13T21:32:11.699134971Z" level=info msg="CreateContainer within sandbox \"7cf301a189f0b4830896083a67cfe6d71185fc017cd8dfd9326c06f3d8329675\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d\"" Jan 13 21:32:11.701884 containerd[2100]: time="2025-01-13T21:32:11.700055577Z" level=info msg="StartContainer for \"8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d\"" Jan 13 21:32:11.718521 kubelet[3014]: I0113 21:32:11.718477 3014 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:11.718905 kubelet[3014]: E0113 21:32:11.718873 3014 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.216:6443/api/v1/nodes\": dial tcp 172.31.23.216:6443: connect: connection refused" node="ip-172-31-23-216" Jan 13 21:32:11.774484 containerd[2100]: time="2025-01-13T21:32:11.774441309Z" level=info msg="StartContainer for \"02611f865147198db373cb6bd5309d94da892b551694dca7aa92cd13457aa938\" returns successfully" Jan 13 21:32:11.828958 containerd[2100]: time="2025-01-13T21:32:11.828767884Z" level=info msg="StartContainer for \"a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba\" returns successfully" Jan 13 21:32:11.888085 containerd[2100]: time="2025-01-13T21:32:11.886625243Z" level=info msg="StartContainer for \"8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d\" returns successfully" Jan 13 21:32:12.297111 kubelet[3014]: E0113 21:32:12.297069 3014 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.216:6443: connect: connection refused Jan 13 21:32:13.329012 kubelet[3014]: I0113 21:32:13.324416 3014 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:14.686268 kubelet[3014]: E0113 21:32:14.684984 3014 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-216\" not found" node="ip-172-31-23-216" Jan 13 21:32:14.731327 kubelet[3014]: I0113 21:32:14.731229 3014 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-216" Jan 13 21:32:15.177661 kubelet[3014]: I0113 21:32:15.177616 3014 apiserver.go:52] "Watching apiserver" Jan 13 21:32:15.216093 kubelet[3014]: I0113 21:32:15.216020 3014 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:32:17.664897 systemd[1]: Reloading requested from client PID 3284 ('systemctl') (unit session-7.scope)... Jan 13 21:32:17.664920 systemd[1]: Reloading... Jan 13 21:32:17.798861 zram_generator::config[3320]: No configuration found. Jan 13 21:32:18.056466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 13 21:32:18.177240 systemd[1]: Reloading finished in 511 ms. Jan 13 21:32:18.215602 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:18.225598 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:32:18.225976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:18.238266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:32:18.501167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:32:18.519498 (kubelet)[3391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:32:18.625388 update_engine[2060]: I20250113 21:32:18.624125 2060 update_attempter.cc:509] Updating boot flags... Jan 13 21:32:18.773109 kubelet[3391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:18.773109 kubelet[3391]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:32:18.773109 kubelet[3391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:32:18.788852 kubelet[3391]: I0113 21:32:18.779120 3391 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:32:18.809693 kubelet[3391]: I0113 21:32:18.809087 3391 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:32:18.809693 kubelet[3391]: I0113 21:32:18.809139 3391 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:32:18.809693 kubelet[3391]: I0113 21:32:18.809523 3391 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:32:18.817274 kubelet[3391]: I0113 21:32:18.814938 3391 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:32:18.839016 kubelet[3391]: I0113 21:32:18.830025 3391 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:32:18.842429 kubelet[3391]: I0113 21:32:18.841481 3391 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:32:18.847704 kubelet[3391]: I0113 21:32:18.846206 3391 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:32:18.847704 kubelet[3391]: I0113 21:32:18.847533 3391 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:32:18.847704 kubelet[3391]: I0113 21:32:18.847578 3391 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:32:18.848025 kubelet[3391]: I0113 21:32:18.847593 3391 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:32:18.855278 kubelet[3391]: I0113 21:32:18.848140 3391 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:18.855278 kubelet[3391]: I0113 21:32:18.850448 3391 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:32:18.855278 kubelet[3391]: I0113 21:32:18.850615 3391 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:32:18.855278 kubelet[3391]: I0113 21:32:18.850658 3391 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:32:18.855278 kubelet[3391]: I0113 21:32:18.850698 3391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:32:18.888936 kubelet[3391]: I0113 21:32:18.885335 3391 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:32:18.910234 kubelet[3391]: I0113 21:32:18.909022 3391 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:32:18.910756 kubelet[3391]: I0113 21:32:18.910717 3391 server.go:1256] "Started kubelet" Jan 13 21:32:18.919249 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3420) Jan 13 21:32:18.943770 kubelet[3391]: I0113 21:32:18.940977 3391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:32:18.950435 kubelet[3391]: I0113 21:32:18.950403 3391 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:32:18.959866 kubelet[3391]: I0113 21:32:18.957888 3391 server.go:461] "Adding debug handlers to kubelet server" Jan 13 
21:32:18.963614 kubelet[3391]: I0113 21:32:18.962062 3391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:32:18.963614 kubelet[3391]: I0113 21:32:18.962643 3391 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:32:18.984885 kubelet[3391]: I0113 21:32:18.983086 3391 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:32:18.984885 kubelet[3391]: I0113 21:32:18.983590 3391 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:32:18.984885 kubelet[3391]: I0113 21:32:18.983757 3391 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:32:19.029304 kubelet[3391]: I0113 21:32:19.029188 3391 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:32:19.035884 kubelet[3391]: I0113 21:32:19.035728 3391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:32:19.043223 kubelet[3391]: I0113 21:32:19.043196 3391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:32:19.056084 kubelet[3391]: E0113 21:32:19.055777 3391 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:32:19.065887 kubelet[3391]: I0113 21:32:19.065717 3391 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:32:19.066592 kubelet[3391]: I0113 21:32:19.066556 3391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:32:19.069098 kubelet[3391]: I0113 21:32:19.066602 3391 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:32:19.069098 kubelet[3391]: I0113 21:32:19.066622 3391 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:32:19.069098 kubelet[3391]: E0113 21:32:19.068110 3391 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:32:19.139856 kubelet[3391]: E0113 21:32:19.138077 3391 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jan 13 21:32:19.145274 kubelet[3391]: I0113 21:32:19.145180 3391 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-216" Jan 13 21:32:19.168255 kubelet[3391]: E0113 21:32:19.168209 3391 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:32:19.174959 kubelet[3391]: I0113 21:32:19.173435 3391 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-216" Jan 13 21:32:19.175307 kubelet[3391]: I0113 21:32:19.175285 3391 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-216" Jan 13 21:32:19.374792 kubelet[3391]: E0113 21:32:19.374761 3391 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:32:19.424854 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3406) Jan 13 21:32:19.494103 kubelet[3391]: I0113 21:32:19.493773 3391 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:32:19.494103 kubelet[3391]: I0113 21:32:19.493803 3391 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:32:19.494103 kubelet[3391]: I0113 21:32:19.493825 3391 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:32:19.494548 kubelet[3391]: I0113 21:32:19.494533 3391 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:32:19.494649 kubelet[3391]: I0113 21:32:19.494641 3391 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:32:19.495629 kubelet[3391]: I0113 21:32:19.495186 3391 policy_none.go:49] "None policy: Start" Jan 13 21:32:19.498169 kubelet[3391]: I0113 21:32:19.497566 3391 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:32:19.498169 kubelet[3391]: I0113 21:32:19.497612 3391 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:32:19.498169 kubelet[3391]: I0113 21:32:19.497931 3391 state_mem.go:75] "Updated machine memory state" Jan 13 21:32:19.511055 kubelet[3391]: I0113 21:32:19.510630 3391 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:32:19.516423 kubelet[3391]: I0113 21:32:19.516395 3391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:32:19.763465 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3406) Jan 13 21:32:19.775919 kubelet[3391]: I0113 21:32:19.775763 3391 topology_manager.go:215] "Topology Admit Handler" podUID="5264b917ac128845d4e4545dc716c5f4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-216" Jan 13 21:32:19.775919 kubelet[3391]: I0113 21:32:19.775901 3391 topology_manager.go:215] "Topology Admit Handler" podUID="108d238c88cdb03b89beef917e506462" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.776486 kubelet[3391]: I0113 21:32:19.775947 3391 topology_manager.go:215] "Topology Admit Handler" podUID="625ced77fea44bd56a3e41c331d68647" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-216" Jan 13 21:32:19.788099 kubelet[3391]: E0113 21:32:19.786238 3391 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-216\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:19.788099 kubelet[3391]: E0113 21:32:19.786347 3391 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-216\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.809447 kubelet[3391]: I0113 21:32:19.809241 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-ca-certs\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:19.809447 kubelet[3391]: I0113 21:32:19.809328 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.809660 kubelet[3391]: I0113 21:32:19.809460 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/625ced77fea44bd56a3e41c331d68647-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-216\" (UID: \"625ced77fea44bd56a3e41c331d68647\") " pod="kube-system/kube-scheduler-ip-172-31-23-216" Jan 13 21:32:19.809660 kubelet[3391]: I0113 21:32:19.809495 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:19.809961 kubelet[3391]: I0113 21:32:19.809935 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5264b917ac128845d4e4545dc716c5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-216\" (UID: \"5264b917ac128845d4e4545dc716c5f4\") " pod="kube-system/kube-apiserver-ip-172-31-23-216" Jan 13 21:32:19.810076 kubelet[3391]: I0113 21:32:19.810024 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.810128 kubelet[3391]: I0113 21:32:19.810094 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.810176 kubelet[3391]: I0113 21:32:19.810162 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.810851 kubelet[3391]: I0113 21:32:19.810239 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/108d238c88cdb03b89beef917e506462-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-216\" (UID: \"108d238c88cdb03b89beef917e506462\") " pod="kube-system/kube-controller-manager-ip-172-31-23-216" Jan 13 21:32:19.874404 kubelet[3391]: I0113 21:32:19.874364 3391 apiserver.go:52] "Watching apiserver" Jan 13 21:32:19.884812 kubelet[3391]: I0113 21:32:19.884778 3391 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:32:20.194124 kubelet[3391]: I0113 21:32:20.193685 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-216" podStartSLOduration=2.193604756 podStartE2EDuration="2.193604756s" podCreationTimestamp="2025-01-13 21:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:20.178541761 +0000 UTC m=+1.646893613" watchObservedRunningTime="2025-01-13 21:32:20.193604756 +0000 UTC m=+1.661956599" Jan 13 
21:32:20.219930 kubelet[3391]: I0113 21:32:20.219803 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-216" podStartSLOduration=4.219113064 podStartE2EDuration="4.219113064s" podCreationTimestamp="2025-01-13 21:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:20.198129536 +0000 UTC m=+1.666481383" watchObservedRunningTime="2025-01-13 21:32:20.219113064 +0000 UTC m=+1.687464908" Jan 13 21:32:20.220492 kubelet[3391]: I0113 21:32:20.220444 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-216" podStartSLOduration=1.22039906 podStartE2EDuration="1.22039906s" podCreationTimestamp="2025-01-13 21:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:20.214178231 +0000 UTC m=+1.682530074" watchObservedRunningTime="2025-01-13 21:32:20.22039906 +0000 UTC m=+1.688750962" Jan 13 21:32:25.699041 sudo[2441]: pam_unix(sudo:session): session closed for user root Jan 13 21:32:25.721746 sshd[2437]: pam_unix(sshd:session): session closed for user core Jan 13 21:32:25.726027 systemd[1]: sshd@6-172.31.23.216:22-147.75.109.163:40208.service: Deactivated successfully. Jan 13 21:32:25.733426 systemd-logind[2056]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:32:25.734540 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:32:25.736453 systemd-logind[2056]: Removed session 7. Jan 13 21:32:33.081496 kubelet[3391]: I0113 21:32:33.081105 3391 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:32:33.083265 containerd[2100]: time="2025-01-13T21:32:33.082405963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
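At this point the kubelet has pushed PodCIDR 192.168.0.0/24 to the runtime, while containerd still waits for a CNI config to be dropped in. As a small illustrative sketch (not from this system), the Go standard library is enough to parse and inspect that CIDR:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The PodCIDR value reported by the kubelet above.
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %s, base IP %s, /%d of %d bits\n", ipnet, ip, ones, bits)
}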
Jan 13 21:32:33.093710 kubelet[3391]: I0113 21:32:33.089457 3391 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:32:33.152148 kubelet[3391]: I0113 21:32:33.152101 3391 topology_manager.go:215] "Topology Admit Handler" podUID="85609219-aaf6-44f4-86e4-540019b5f0ae" podNamespace="kube-system" podName="kube-proxy-svtpx" Jan 13 21:32:33.210039 kubelet[3391]: I0113 21:32:33.209934 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85609219-aaf6-44f4-86e4-540019b5f0ae-xtables-lock\") pod \"kube-proxy-svtpx\" (UID: \"85609219-aaf6-44f4-86e4-540019b5f0ae\") " pod="kube-system/kube-proxy-svtpx" Jan 13 21:32:33.210503 kubelet[3391]: I0113 21:32:33.210303 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85609219-aaf6-44f4-86e4-540019b5f0ae-lib-modules\") pod \"kube-proxy-svtpx\" (UID: \"85609219-aaf6-44f4-86e4-540019b5f0ae\") " pod="kube-system/kube-proxy-svtpx" Jan 13 21:32:33.210503 kubelet[3391]: I0113 21:32:33.210379 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85609219-aaf6-44f4-86e4-540019b5f0ae-kube-proxy\") pod \"kube-proxy-svtpx\" (UID: \"85609219-aaf6-44f4-86e4-540019b5f0ae\") " pod="kube-system/kube-proxy-svtpx" Jan 13 21:32:33.210503 kubelet[3391]: I0113 21:32:33.210462 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrc75\" (UniqueName: \"kubernetes.io/projected/85609219-aaf6-44f4-86e4-540019b5f0ae-kube-api-access-jrc75\") pod \"kube-proxy-svtpx\" (UID: \"85609219-aaf6-44f4-86e4-540019b5f0ae\") " pod="kube-system/kube-proxy-svtpx" Jan 13 21:32:33.277947 kubelet[3391]: I0113 21:32:33.275404 3391 topology_manager.go:215] "Topology Admit Handler" podUID="bc400886-7b91-4c38-812c-0c980501af94" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-trhw6" Jan 13 21:32:33.291366 kubelet[3391]: W0113 21:32:33.289434 3391 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-216" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-23-216' and this object Jan 13 21:32:33.291366 kubelet[3391]: W0113 21:32:33.290157 3391 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-23-216" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-23-216' and this object Jan 13 21:32:33.291366 kubelet[3391]: E0113 21:32:33.291301 3391 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-23-216" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-23-216' and this object Jan 13 21:32:33.291366 kubelet[3391]: E0113 21:32:33.291309 3391 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-23-216" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-23-216' and this object Jan 13 21:32:33.311895 kubelet[3391]: I0113 21:32:33.311533 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h66d\" (UniqueName: \"kubernetes.io/projected/bc400886-7b91-4c38-812c-0c980501af94-kube-api-access-2h66d\") pod \"tigera-operator-c7ccbd65-trhw6\" (UID: \"bc400886-7b91-4c38-812c-0c980501af94\") " pod="tigera-operator/tigera-operator-c7ccbd65-trhw6" Jan 13 21:32:33.311895 kubelet[3391]: I0113 21:32:33.311660 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc400886-7b91-4c38-812c-0c980501af94-var-lib-calico\") pod \"tigera-operator-c7ccbd65-trhw6\" (UID: \"bc400886-7b91-4c38-812c-0c980501af94\") " pod="tigera-operator/tigera-operator-c7ccbd65-trhw6" Jan 13 21:32:33.465417 containerd[2100]: time="2025-01-13T21:32:33.464701608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svtpx,Uid:85609219-aaf6-44f4-86e4-540019b5f0ae,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:33.508437 containerd[2100]: time="2025-01-13T21:32:33.508166522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:33.509265 containerd[2100]: time="2025-01-13T21:32:33.508819410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:33.509265 containerd[2100]: time="2025-01-13T21:32:33.508866428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:33.509265 containerd[2100]: time="2025-01-13T21:32:33.509140001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:33.570589 containerd[2100]: time="2025-01-13T21:32:33.570527310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svtpx,Uid:85609219-aaf6-44f4-86e4-540019b5f0ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"87aedd7ed7f560a8b05bbc629e2b1c83e51e7fac6a7a32304217600bf24eac6d\"" Jan 13 21:32:33.574903 containerd[2100]: time="2025-01-13T21:32:33.574864709Z" level=info msg="CreateContainer within sandbox \"87aedd7ed7f560a8b05bbc629e2b1c83e51e7fac6a7a32304217600bf24eac6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:32:33.595087 containerd[2100]: time="2025-01-13T21:32:33.595038936Z" level=info msg="CreateContainer within sandbox \"87aedd7ed7f560a8b05bbc629e2b1c83e51e7fac6a7a32304217600bf24eac6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12f6f36056fedc65a887ce0ce094e10dd98b603b0b4d0ae336e379b10dcbe2a9\"" Jan 13 21:32:33.596141 containerd[2100]: time="2025-01-13T21:32:33.595984992Z" level=info msg="StartContainer for \"12f6f36056fedc65a887ce0ce094e10dd98b603b0b4d0ae336e379b10dcbe2a9\"" Jan 13 21:32:33.694590 containerd[2100]: time="2025-01-13T21:32:33.694547795Z" level=info msg="StartContainer for \"12f6f36056fedc65a887ce0ce094e10dd98b603b0b4d0ae336e379b10dcbe2a9\" returns successfully" Jan 13 21:32:34.423623 kubelet[3391]: E0113 21:32:34.423571 3391 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:32:34.423623 kubelet[3391]: E0113 21:32:34.423625 3391 projected.go:200] Error preparing data for projected volume kube-api-access-2h66d for pod tigera-operator/tigera-operator-c7ccbd65-trhw6: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:32:34.434583 kubelet[3391]: E0113 21:32:34.434339 3391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc400886-7b91-4c38-812c-0c980501af94-kube-api-access-2h66d podName:bc400886-7b91-4c38-812c-0c980501af94 nodeName:}" failed. No retries permitted until 2025-01-13 21:32:34.934271266 +0000 UTC m=+16.402623118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2h66d" (UniqueName: "kubernetes.io/projected/bc400886-7b91-4c38-812c-0c980501af94-kube-api-access-2h66d") pod "tigera-operator-c7ccbd65-trhw6" (UID: "bc400886-7b91-4c38-812c-0c980501af94") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:32:35.090305 containerd[2100]: time="2025-01-13T21:32:35.090255665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-trhw6,Uid:bc400886-7b91-4c38-812c-0c980501af94,Namespace:tigera-operator,Attempt:0,}" Jan 13 21:32:35.175406 containerd[2100]: time="2025-01-13T21:32:35.175278561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:35.175406 containerd[2100]: time="2025-01-13T21:32:35.175359977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:35.175763 containerd[2100]: time="2025-01-13T21:32:35.175381827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:35.177138 containerd[2100]: time="2025-01-13T21:32:35.176815486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:35.276016 containerd[2100]: time="2025-01-13T21:32:35.275879981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-trhw6,Uid:bc400886-7b91-4c38-812c-0c980501af94,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"47353b1a209c2723fd0157b217c0286bdbd4040e4c50880177b8f8efa3f1d76a\"" Jan 13 21:32:35.302545 containerd[2100]: time="2025-01-13T21:32:35.302460278Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 21:32:39.090629 kubelet[3391]: I0113 21:32:39.090517 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-svtpx" podStartSLOduration=6.090466342 podStartE2EDuration="6.090466342s" podCreationTimestamp="2025-01-13 21:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:32:34.259155289 +0000 UTC m=+15.727507142" watchObservedRunningTime="2025-01-13 21:32:39.090466342 +0000 UTC m=+20.558818213" Jan 13 21:32:39.336527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004320467.mount: Deactivated successfully. Jan 13 21:32:40.132991 containerd[2100]: time="2025-01-13T21:32:40.132937590Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:40.134431 containerd[2100]: time="2025-01-13T21:32:40.134274146Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763697" Jan 13 21:32:40.138951 containerd[2100]: time="2025-01-13T21:32:40.135647447Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:40.147194 containerd[2100]: time="2025-01-13T21:32:40.147122773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:40.148970 containerd[2100]: time="2025-01-13T21:32:40.148924043Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.846368241s" Jan 13 21:32:40.149155 containerd[2100]: time="2025-01-13T21:32:40.148977999Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 13 21:32:40.154471 containerd[2100]: time="2025-01-13T21:32:40.154430904Z" level=info msg="CreateContainer within sandbox \"47353b1a209c2723fd0157b217c0286bdbd4040e4c50880177b8f8efa3f1d76a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 21:32:40.223770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444997283.mount: Deactivated successfully. 
Jan 13 21:32:40.233278 containerd[2100]: time="2025-01-13T21:32:40.233230873Z" level=info msg="CreateContainer within sandbox \"47353b1a209c2723fd0157b217c0286bdbd4040e4c50880177b8f8efa3f1d76a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973\"" Jan 13 21:32:40.234763 containerd[2100]: time="2025-01-13T21:32:40.233998251Z" level=info msg="StartContainer for \"56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973\"" Jan 13 21:32:40.321881 containerd[2100]: time="2025-01-13T21:32:40.321809262Z" level=info msg="StartContainer for \"56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973\" returns successfully" Jan 13 21:32:43.779654 kubelet[3391]: I0113 21:32:43.779603 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-trhw6" podStartSLOduration=5.90955068 podStartE2EDuration="10.779528629s" podCreationTimestamp="2025-01-13 21:32:33 +0000 UTC" firstStartedPulling="2025-01-13 21:32:35.2795035 +0000 UTC m=+16.747855332" lastFinishedPulling="2025-01-13 21:32:40.149481437 +0000 UTC m=+21.617833281" observedRunningTime="2025-01-13 21:32:41.29290018 +0000 UTC m=+22.761252031" watchObservedRunningTime="2025-01-13 21:32:43.779528629 +0000 UTC m=+25.247880483" Jan 13 21:32:43.780314 kubelet[3391]: I0113 21:32:43.779765 3391 topology_manager.go:215] "Topology Admit Handler" podUID="54104ea0-0440-40a9-9abe-b389894c34cf" podNamespace="calico-system" podName="calico-typha-65bfbf68ff-5fznz" Jan 13 21:32:43.824641 kubelet[3391]: I0113 21:32:43.824585 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xggnp\" (UniqueName: \"kubernetes.io/projected/54104ea0-0440-40a9-9abe-b389894c34cf-kube-api-access-xggnp\") pod \"calico-typha-65bfbf68ff-5fznz\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " pod="calico-system/calico-typha-65bfbf68ff-5fznz" Jan 13 21:32:43.824801 kubelet[3391]: I0113 21:32:43.824656 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54104ea0-0440-40a9-9abe-b389894c34cf-tigera-ca-bundle\") pod \"calico-typha-65bfbf68ff-5fznz\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " pod="calico-system/calico-typha-65bfbf68ff-5fznz" Jan 13 21:32:43.824801 kubelet[3391]: I0113 21:32:43.824687 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54104ea0-0440-40a9-9abe-b389894c34cf-typha-certs\") pod \"calico-typha-65bfbf68ff-5fznz\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " pod="calico-system/calico-typha-65bfbf68ff-5fznz" Jan 13 21:32:44.004858 kubelet[3391]: I0113 21:32:44.004801 3391 topology_manager.go:215] "Topology Admit Handler" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" podNamespace="calico-system" podName="calico-node-2hbnf" Jan 13 21:32:44.094678 containerd[2100]: time="2025-01-13T21:32:44.094636732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65bfbf68ff-5fznz,Uid:54104ea0-0440-40a9-9abe-b389894c34cf,Namespace:calico-system,Attempt:0,}" Jan 13 21:32:44.146998 kubelet[3391]: I0113 21:32:44.142455 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-lib-calico\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.146998 kubelet[3391]: I0113 21:32:44.142508 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-xtables-lock\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.146998 kubelet[3391]: I0113 21:32:44.142537 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-policysync\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.146998 kubelet[3391]: I0113 21:32:44.142565 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-log-dir\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.146998 kubelet[3391]: I0113 21:32:44.142594 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr9mh\" (UniqueName: \"kubernetes.io/projected/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-kube-api-access-kr9mh\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147315 kubelet[3391]: I0113 21:32:44.142621 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-flexvol-driver-host\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147315 kubelet[3391]: I0113 21:32:44.142647 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-node-certs\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147315 kubelet[3391]: I0113 21:32:44.142672 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-net-dir\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147315 kubelet[3391]: I0113 21:32:44.142708 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-lib-modules\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147315 kubelet[3391]: I0113 21:32:44.142735 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-tigera-ca-bundle\") pod 
\"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147526 kubelet[3391]: I0113 21:32:44.142765 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-run-calico\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.147526 kubelet[3391]: I0113 21:32:44.142792 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-bin-dir\") pod \"calico-node-2hbnf\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " pod="calico-system/calico-node-2hbnf" Jan 13 21:32:44.188140 containerd[2100]: time="2025-01-13T21:32:44.185976530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:44.188140 containerd[2100]: time="2025-01-13T21:32:44.186036879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:44.188140 containerd[2100]: time="2025-01-13T21:32:44.186061790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:44.188140 containerd[2100]: time="2025-01-13T21:32:44.186157804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:44.262101 kubelet[3391]: E0113 21:32:44.262063 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.262101 kubelet[3391]: W0113 21:32:44.262098 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.267259 kubelet[3391]: E0113 21:32:44.263812 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.292702 kubelet[3391]: E0113 21:32:44.292000 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.292702 kubelet[3391]: W0113 21:32:44.292023 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.292702 kubelet[3391]: E0113 21:32:44.292078 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.334870 containerd[2100]: time="2025-01-13T21:32:44.334798638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2hbnf,Uid:ee2e4eb5-5141-49ae-b4d9-ac88f344b28e,Namespace:calico-system,Attempt:0,}" Jan 13 21:32:44.358233 kubelet[3391]: I0113 21:32:44.358007 3391 topology_manager.go:215] "Topology Admit Handler" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" podNamespace="calico-system" podName="csi-node-driver-m7j9j" Jan 13 21:32:44.362329 kubelet[3391]: E0113 21:32:44.361817 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:44.430946 kubelet[3391]: E0113 21:32:44.421326 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.430946 kubelet[3391]: W0113 21:32:44.421668 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.430946 kubelet[3391]: E0113 21:32:44.421700 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.430946 kubelet[3391]: E0113 21:32:44.429532 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.430946 kubelet[3391]: W0113 21:32:44.429554 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.430946 kubelet[3391]: E0113 21:32:44.429587 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.433668 kubelet[3391]: E0113 21:32:44.433642 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.433872 kubelet[3391]: W0113 21:32:44.433852 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.434028 kubelet[3391]: E0113 21:32:44.433956 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.435252 kubelet[3391]: E0113 21:32:44.435237 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.436606 kubelet[3391]: W0113 21:32:44.436243 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.437784 kubelet[3391]: E0113 21:32:44.436876 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.439513 kubelet[3391]: E0113 21:32:44.439442 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.439627 kubelet[3391]: W0113 21:32:44.439613 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.439719 kubelet[3391]: E0113 21:32:44.439709 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.441199 kubelet[3391]: E0113 21:32:44.440904 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.441199 kubelet[3391]: W0113 21:32:44.440919 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.441199 kubelet[3391]: E0113 21:32:44.440942 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.441522 kubelet[3391]: E0113 21:32:44.441477 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.442300 kubelet[3391]: W0113 21:32:44.441751 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.442300 kubelet[3391]: E0113 21:32:44.441775 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.442300 kubelet[3391]: E0113 21:32:44.442175 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.442300 kubelet[3391]: W0113 21:32:44.442186 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.442300 kubelet[3391]: E0113 21:32:44.442203 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.446542 kubelet[3391]: E0113 21:32:44.446521 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.448259 kubelet[3391]: W0113 21:32:44.447953 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.448259 kubelet[3391]: E0113 21:32:44.447989 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.454354 kubelet[3391]: E0113 21:32:44.452957 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.454354 kubelet[3391]: W0113 21:32:44.452977 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.454354 kubelet[3391]: E0113 21:32:44.453003 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.457036 kubelet[3391]: E0113 21:32:44.455552 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.457036 kubelet[3391]: W0113 21:32:44.455575 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.457036 kubelet[3391]: E0113 21:32:44.455603 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.461454 kubelet[3391]: E0113 21:32:44.458952 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.461454 kubelet[3391]: W0113 21:32:44.459125 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.461454 kubelet[3391]: E0113 21:32:44.459161 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.461454 kubelet[3391]: E0113 21:32:44.460237 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.461454 kubelet[3391]: W0113 21:32:44.460289 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.461454 kubelet[3391]: E0113 21:32:44.460310 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.462125 kubelet[3391]: E0113 21:32:44.461906 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.462125 kubelet[3391]: W0113 21:32:44.461923 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.462125 kubelet[3391]: E0113 21:32:44.462005 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.462609 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.469578 kubelet[3391]: W0113 21:32:44.462621 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.462638 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.463314 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.469578 kubelet[3391]: W0113 21:32:44.463325 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.463346 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.464172 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.469578 kubelet[3391]: W0113 21:32:44.464188 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.469578 kubelet[3391]: E0113 21:32:44.464211 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.473418 kubelet[3391]: I0113 21:32:44.464247 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1349c369-e827-4f6c-bda4-a032fbaa74c0-registration-dir\") pod \"csi-node-driver-m7j9j\" (UID: \"1349c369-e827-4f6c-bda4-a032fbaa74c0\") " pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:44.473418 kubelet[3391]: E0113 21:32:44.464796 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.473418 kubelet[3391]: W0113 21:32:44.464811 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.473418 kubelet[3391]: E0113 21:32:44.465029 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.473418 kubelet[3391]: I0113 21:32:44.465065 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1349c369-e827-4f6c-bda4-a032fbaa74c0-kubelet-dir\") pod \"csi-node-driver-m7j9j\" (UID: \"1349c369-e827-4f6c-bda4-a032fbaa74c0\") " pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:44.473418 kubelet[3391]: E0113 21:32:44.465635 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.473418 kubelet[3391]: W0113 21:32:44.465647 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.473418 kubelet[3391]: E0113 21:32:44.465667 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.473418 kubelet[3391]: E0113 21:32:44.466402 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.477977 kubelet[3391]: W0113 21:32:44.466423 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.477977 kubelet[3391]: E0113 21:32:44.466764 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.477977 kubelet[3391]: E0113 21:32:44.467074 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.477977 kubelet[3391]: W0113 21:32:44.467085 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.477977 kubelet[3391]: E0113 21:32:44.467351 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.477977 kubelet[3391]: E0113 21:32:44.467670 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.477977 kubelet[3391]: W0113 21:32:44.467682 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.477977 kubelet[3391]: E0113 21:32:44.467859 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.477977 kubelet[3391]: I0113 21:32:44.467897 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1349c369-e827-4f6c-bda4-a032fbaa74c0-varrun\") pod \"csi-node-driver-m7j9j\" (UID: \"1349c369-e827-4f6c-bda4-a032fbaa74c0\") " pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.468271 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.479714 kubelet[3391]: W0113 21:32:44.468282 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.468668 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.468869 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.479714 kubelet[3391]: W0113 21:32:44.468879 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.469041 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.469536 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.479714 kubelet[3391]: W0113 21:32:44.469548 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.469756 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.479714 kubelet[3391]: E0113 21:32:44.470320 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487287 kubelet[3391]: W0113 21:32:44.470331 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.470351 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.471374 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487287 kubelet[3391]: W0113 21:32:44.471386 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.471415 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.473986 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487287 kubelet[3391]: W0113 21:32:44.473999 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.474017 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.487287 kubelet[3391]: E0113 21:32:44.474439 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487287 kubelet[3391]: W0113 21:32:44.474452 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.474489 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.487821 kubelet[3391]: I0113 21:32:44.474725 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1349c369-e827-4f6c-bda4-a032fbaa74c0-socket-dir\") pod \"csi-node-driver-m7j9j\" (UID: \"1349c369-e827-4f6c-bda4-a032fbaa74c0\") " pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.474810 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487821 kubelet[3391]: W0113 21:32:44.474819 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.474848 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.477020 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.487821 kubelet[3391]: W0113 21:32:44.477034 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.477055 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.487821 kubelet[3391]: E0113 21:32:44.477776 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.493584 kubelet[3391]: W0113 21:32:44.477789 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.493584 kubelet[3391]: E0113 21:32:44.477806 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.559163 containerd[2100]: time="2025-01-13T21:32:44.530985500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:32:44.559163 containerd[2100]: time="2025-01-13T21:32:44.531115797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:32:44.559163 containerd[2100]: time="2025-01-13T21:32:44.531161697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:44.559163 containerd[2100]: time="2025-01-13T21:32:44.531357831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577050 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.578661 kubelet[3391]: W0113 21:32:44.577083 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577108 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577465 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.578661 kubelet[3391]: W0113 21:32:44.577476 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577501 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577733 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.578661 kubelet[3391]: W0113 21:32:44.577744 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.577854 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.578661 kubelet[3391]: E0113 21:32:44.578053 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.580320 kubelet[3391]: W0113 21:32:44.578064 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.580320 kubelet[3391]: E0113 21:32:44.578102 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.580320 kubelet[3391]: E0113 21:32:44.578418 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.580320 kubelet[3391]: W0113 21:32:44.578429 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.580320 kubelet[3391]: E0113 21:32:44.578469 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.580320 kubelet[3391]: E0113 21:32:44.579025 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.580320 kubelet[3391]: W0113 21:32:44.579036 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.580320 kubelet[3391]: E0113 21:32:44.579181 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 21:32:44.580320 kubelet[3391]: I0113 21:32:44.579215 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gm8z\" (UniqueName: \"kubernetes.io/projected/1349c369-e827-4f6c-bda4-a032fbaa74c0-kube-api-access-6gm8z\") pod \"csi-node-driver-m7j9j\" (UID: \"1349c369-e827-4f6c-bda4-a032fbaa74c0\") " pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:44.580698 kubelet[3391]: E0113 21:32:44.579913 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.580698 kubelet[3391]: W0113 21:32:44.579939 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.580698 kubelet[3391]: E0113 21:32:44.579970 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.581241 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.586745 kubelet[3391]: W0113 21:32:44.581252 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.581297 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.581580 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.586745 kubelet[3391]: W0113 21:32:44.581589 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.581650 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.581907 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 21:32:44.586745 kubelet[3391]: W0113 21:32:44.581917 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.582005 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 13 21:32:44.586745 kubelet[3391]: E0113 21:32:44.582365 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:32:44.587981 kubelet[3391]: W0113 21:32:44.582376 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:32:44.587981 kubelet[3391]: E0113 21:32:44.582511 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:32:44.597462 containerd[2100]: time="2025-01-13T21:32:44.595902622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65bfbf68ff-5fznz,Uid:54104ea0-0440-40a9-9abe-b389894c34cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\""
Jan 13 21:32:44.603504 containerd[2100]: time="2025-01-13T21:32:44.602888620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 21:32:44.724819 containerd[2100]: time="2025-01-13T21:32:44.722481714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2hbnf,Uid:ee2e4eb5-5141-49ae-b4d9-ac88f344b28e,Namespace:calico-system,Attempt:0,} returns sandbox id \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\""
Jan 13 21:32:46.067707 kubelet[3391]: E0113 21:32:46.067517 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0"
Jan 13 21:32:46.222848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223884569.mount: Deactivated successfully.
Jan 13 21:32:47.375208 containerd[2100]: time="2025-01-13T21:32:47.375158559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:47.377399 containerd[2100]: time="2025-01-13T21:32:47.377244650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 21:32:47.380922 containerd[2100]: time="2025-01-13T21:32:47.379355414Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:47.385494 containerd[2100]: time="2025-01-13T21:32:47.385426844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:47.386523 containerd[2100]: time="2025-01-13T21:32:47.386176389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.783240791s"
Jan 13 21:32:47.386523 containerd[2100]: time="2025-01-13T21:32:47.386221177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 21:32:47.440695 containerd[2100]: time="2025-01-13T21:32:47.440656629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 21:32:47.446668 containerd[2100]: time="2025-01-13T21:32:47.446280308Z" level=info msg="CreateContainer within sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 21:32:47.473771 containerd[2100]: time="2025-01-13T21:32:47.473728088Z" level=info msg="CreateContainer within sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\""
Jan 13 21:32:47.474797 containerd[2100]: time="2025-01-13T21:32:47.474741876Z" level=info msg="StartContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\""
Jan 13 21:32:47.658929 containerd[2100]: time="2025-01-13T21:32:47.642981820Z" level=info msg="StartContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" returns successfully"
Jan 13 21:32:48.067983 kubelet[3391]: E0113 21:32:48.067934 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0"
Jan 13 21:32:48.352534 kubelet[3391]: E0113 21:32:48.341246 3391 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 21:32:48.352534 kubelet[3391]: W0113 21:32:48.341279 3391 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 21:32:48.352534 kubelet[3391]: E0113 21:32:48.341308 3391 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 21:32:48.370707 kubelet[3391]: I0113 21:32:48.369940 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65bfbf68ff-5fznz" podStartSLOduration=2.5849529479999998 podStartE2EDuration="5.369887806s" podCreationTimestamp="2025-01-13 21:32:43 +0000 UTC" firstStartedPulling="2025-01-13 21:32:44.602220102 +0000 UTC m=+26.070571943" lastFinishedPulling="2025-01-13 21:32:47.387154948 +0000 UTC m=+28.855506801" observedRunningTime="2025-01-13 21:32:48.354602047 +0000 UTC m=+29.822953900" watchObservedRunningTime="2025-01-13 21:32:48.369887806 +0000 UTC m=+29.838239667"
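The driver-call.go/plugins.go triplets above come from the kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument `init` and expects a JSON status object on stdout, so a missing nodeagent~uds/uds executable produces empty output and the "unexpected end of JSON input" error. As an illustrative sketch only (this is not Calico's actual uds driver, which is installed later by the flexvol-driver container), a minimal driver that would satisfy the `init` probe might look like:

```python
#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver's "init" handling.
# Assumption: the kubelet accepts a JSON object with a "status" field and
# optional "capabilities"; this is NOT the real nodeagent~uds/uds binary.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Report success and advertise no attach support, as a
        # node-local, socket-style driver typically would.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Any verb this sketch does not implement is reported as unsupported.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

Until an executable exists at that path, every probe cycle logs the same three-line pattern seen above.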
Jan 13 21:32:48.778124 containerd[2100]: time="2025-01-13T21:32:48.777823299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:48.781502 containerd[2100]: time="2025-01-13T21:32:48.781380799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 13 21:32:48.784100 containerd[2100]: time="2025-01-13T21:32:48.784064435Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:48.787483 containerd[2100]: time="2025-01-13T21:32:48.787426403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:32:48.788623 containerd[2100]: time="2025-01-13T21:32:48.788467035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.347771362s"
Jan 13 21:32:48.788623 containerd[2100]: time="2025-01-13T21:32:48.788513808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 13 21:32:48.791342 containerd[2100]: time="2025-01-13T21:32:48.791118725Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 21:32:48.819087 containerd[2100]: time="2025-01-13T21:32:48.818043349Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\""
Jan 13 21:32:48.821134 containerd[2100]: time="2025-01-13T21:32:48.820777338Z" level=info msg="StartContainer for \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\""
Jan 13 21:32:48.956178 containerd[2100]: time="2025-01-13T21:32:48.956131850Z" level=info msg="StartContainer for \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\" returns successfully"
Jan 13 21:32:48.997961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea-rootfs.mount: Deactivated successfully.
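The flexvol-driver container started here (from the pod2daemon-flexvol image) is what installs Calico's uds binary into the nodeagent~uds plugin directory, after which the probe errors stop recurring. As a rough, assumption-laden analogue of the kubelet's probe step (the real code is Go's driver-call.go, where unmarshalling empty bytes is reported as "unexpected end of JSON input"; the Python error wording differs), the failing call looks roughly like:

```python
#!/usr/bin/env python3
# Sketch of the probe step only: run "<driver> init" and parse stdout as JSON.
# The driver path is taken from the log; this is not the kubelet's actual code.
import json
import subprocess

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe(driver: str) -> dict:
    try:
        out = subprocess.run([driver, "init"], capture_output=True,
                             text=True, check=False).stdout
    except FileNotFoundError:
        # Corresponds to the log's "executable file not found in $PATH" case.
        out = ""
    # Empty or non-JSON output fails here, which plugins.go then surfaces as
    # "error creating Flexvolume plugin from directory nodeagent~uds, skipping".
    return json.loads(out)

if __name__ == "__main__":
    try:
        print(probe(DRIVER))
    except json.JSONDecodeError as err:
        print(f"probe failed: {err}")
```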
Jan 13 21:32:49.080369 containerd[2100]: time="2025-01-13T21:32:49.031312056Z" level=info msg="shim disconnected" id=f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea namespace=k8s.io Jan 13 21:32:49.080659 containerd[2100]: time="2025-01-13T21:32:49.080374696Z" level=warning msg="cleaning up after shim disconnected" id=f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea namespace=k8s.io Jan 13 21:32:49.080659 containerd[2100]: time="2025-01-13T21:32:49.080399230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:32:49.331530 containerd[2100]: time="2025-01-13T21:32:49.331239189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 21:32:50.071136 kubelet[3391]: E0113 21:32:50.071077 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:52.067546 kubelet[3391]: E0113 21:32:52.067514 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:53.928079 containerd[2100]: time="2025-01-13T21:32:53.928028301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:53.929915 containerd[2100]: time="2025-01-13T21:32:53.929814976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 21:32:53.930973 containerd[2100]: time="2025-01-13T21:32:53.930935582Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:53.948860 containerd[2100]: time="2025-01-13T21:32:53.948315962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:32:53.949648 containerd[2100]: time="2025-01-13T21:32:53.949606222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.618259609s" Jan 13 21:32:53.949813 containerd[2100]: time="2025-01-13T21:32:53.949788074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 21:32:53.956817 containerd[2100]: time="2025-01-13T21:32:53.956711072Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:32:53.991536 containerd[2100]: time="2025-01-13T21:32:53.991484084Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\"" Jan 13 21:32:53.993932 containerd[2100]: time="2025-01-13T21:32:53.992339033Z" level=info msg="StartContainer for \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\"" Jan 13 21:32:54.068709 kubelet[3391]: E0113 21:32:54.067545 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:54.118897 containerd[2100]: time="2025-01-13T21:32:54.118819954Z" level=info msg="StartContainer for \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\" returns successfully" Jan 13 21:32:55.200324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba-rootfs.mount: Deactivated successfully. Jan 13 21:32:55.205584 containerd[2100]: time="2025-01-13T21:32:55.204936041Z" level=info msg="shim disconnected" id=a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba namespace=k8s.io Jan 13 21:32:55.205584 containerd[2100]: time="2025-01-13T21:32:55.205006131Z" level=warning msg="cleaning up after shim disconnected" id=a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba namespace=k8s.io Jan 13 21:32:55.205584 containerd[2100]: time="2025-01-13T21:32:55.205020813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:32:55.235178 kubelet[3391]: I0113 21:32:55.234735 3391 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:32:55.237279 containerd[2100]: time="2025-01-13T21:32:55.237223358Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:32:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:32:55.304865 kubelet[3391]: I0113 21:32:55.303884 3391 topology_manager.go:215] "Topology Admit Handler" podUID="5d39a778-23bc-4ff9-9d67-cbce50e1aa94" podNamespace="kube-system" podName="coredns-76f75df574-xh79p" Jan 13 21:32:55.317982 kubelet[3391]: I0113 21:32:55.315391 3391 topology_manager.go:215] "Topology Admit Handler" podUID="1da65a44-04e3-44d6-8959-9a867b5fe933" podNamespace="kube-system" podName="coredns-76f75df574-qn2h7" Jan 13 21:32:55.340821 kubelet[3391]: I0113 21:32:55.339382 3391 topology_manager.go:215] "Topology Admit Handler" podUID="6516d32b-0c84-4b53-a73d-5859b4a02633" podNamespace="calico-system" podName="calico-kube-controllers-97574c6fb-sdstw" Jan 13 21:32:55.340821 kubelet[3391]: I0113 21:32:55.339596 3391 topology_manager.go:215] "Topology Admit Handler" podUID="3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1" podNamespace="calico-apiserver" podName="calico-apiserver-7f9ff6c558-5ns5g" Jan 13 21:32:55.340821 kubelet[3391]: I0113 21:32:55.339721 3391 topology_manager.go:215] "Topology Admit Handler" podUID="7531431e-bdd0-4c4b-b0d9-91a26f9acf4a" podNamespace="calico-apiserver" podName="calico-apiserver-7f9ff6c558-pqxl5" Jan 13 21:32:55.406317 kubelet[3391]: I0113 21:32:55.406044 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1da65a44-04e3-44d6-8959-9a867b5fe933-config-volume\") pod 
\"coredns-76f75df574-qn2h7\" (UID: \"1da65a44-04e3-44d6-8959-9a867b5fe933\") " pod="kube-system/coredns-76f75df574-qn2h7" Jan 13 21:32:55.406317 kubelet[3391]: I0113 21:32:55.406186 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjscp\" (UniqueName: \"kubernetes.io/projected/5d39a778-23bc-4ff9-9d67-cbce50e1aa94-kube-api-access-kjscp\") pod \"coredns-76f75df574-xh79p\" (UID: \"5d39a778-23bc-4ff9-9d67-cbce50e1aa94\") " pod="kube-system/coredns-76f75df574-xh79p" Jan 13 21:32:55.406317 kubelet[3391]: I0113 21:32:55.406247 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlf27\" (UniqueName: \"kubernetes.io/projected/1da65a44-04e3-44d6-8959-9a867b5fe933-kube-api-access-xlf27\") pod \"coredns-76f75df574-qn2h7\" (UID: \"1da65a44-04e3-44d6-8959-9a867b5fe933\") " pod="kube-system/coredns-76f75df574-qn2h7" Jan 13 21:32:55.406956 kubelet[3391]: I0113 21:32:55.406597 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d39a778-23bc-4ff9-9d67-cbce50e1aa94-config-volume\") pod \"coredns-76f75df574-xh79p\" (UID: \"5d39a778-23bc-4ff9-9d67-cbce50e1aa94\") " pod="kube-system/coredns-76f75df574-xh79p" Jan 13 21:32:55.411086 containerd[2100]: time="2025-01-13T21:32:55.409596308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 21:32:55.509412 kubelet[3391]: I0113 21:32:55.507921 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6516d32b-0c84-4b53-a73d-5859b4a02633-tigera-ca-bundle\") pod \"calico-kube-controllers-97574c6fb-sdstw\" (UID: \"6516d32b-0c84-4b53-a73d-5859b4a02633\") " pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" Jan 13 21:32:55.509412 kubelet[3391]: I0113 21:32:55.508035 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbfzk\" (UniqueName: \"kubernetes.io/projected/7531431e-bdd0-4c4b-b0d9-91a26f9acf4a-kube-api-access-mbfzk\") pod \"calico-apiserver-7f9ff6c558-pqxl5\" (UID: \"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a\") " pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" Jan 13 21:32:55.509412 kubelet[3391]: I0113 21:32:55.508098 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7531431e-bdd0-4c4b-b0d9-91a26f9acf4a-calico-apiserver-certs\") pod \"calico-apiserver-7f9ff6c558-pqxl5\" (UID: \"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a\") " pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" Jan 13 21:32:55.509412 kubelet[3391]: I0113 21:32:55.508130 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1-calico-apiserver-certs\") pod \"calico-apiserver-7f9ff6c558-5ns5g\" (UID: \"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1\") " pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" Jan 13 21:32:55.509412 kubelet[3391]: I0113 21:32:55.508207 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9qv\" (UniqueName: \"kubernetes.io/projected/6516d32b-0c84-4b53-a73d-5859b4a02633-kube-api-access-sq9qv\") pod 
\"calico-kube-controllers-97574c6fb-sdstw\" (UID: \"6516d32b-0c84-4b53-a73d-5859b4a02633\") " pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" Jan 13 21:32:55.510736 kubelet[3391]: I0113 21:32:55.510193 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwfn\" (UniqueName: \"kubernetes.io/projected/3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1-kube-api-access-9xwfn\") pod \"calico-apiserver-7f9ff6c558-5ns5g\" (UID: \"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1\") " pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" Jan 13 21:32:55.681886 containerd[2100]: time="2025-01-13T21:32:55.681703309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qn2h7,Uid:1da65a44-04e3-44d6-8959-9a867b5fe933,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:55.682154 containerd[2100]: time="2025-01-13T21:32:55.681703409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xh79p,Uid:5d39a778-23bc-4ff9-9d67-cbce50e1aa94,Namespace:kube-system,Attempt:0,}" Jan 13 21:32:55.689875 containerd[2100]: time="2025-01-13T21:32:55.689812199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-pqxl5,Uid:7531431e-bdd0-4c4b-b0d9-91a26f9acf4a,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:32:55.701354 containerd[2100]: time="2025-01-13T21:32:55.700946036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97574c6fb-sdstw,Uid:6516d32b-0c84-4b53-a73d-5859b4a02633,Namespace:calico-system,Attempt:0,}" Jan 13 21:32:55.710568 containerd[2100]: time="2025-01-13T21:32:55.710523183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-5ns5g,Uid:3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1,Namespace:calico-apiserver,Attempt:0,}" Jan 13 21:32:56.079727 containerd[2100]: time="2025-01-13T21:32:56.079308889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7j9j,Uid:1349c369-e827-4f6c-bda4-a032fbaa74c0,Namespace:calico-system,Attempt:0,}" Jan 13 21:32:56.245750 containerd[2100]: time="2025-01-13T21:32:56.245689634Z" level=error msg="Failed to destroy network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.255783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8-shm.mount: Deactivated successfully. 
Jan 13 21:32:56.260691 containerd[2100]: time="2025-01-13T21:32:56.260611570Z" level=error msg="encountered an error cleaning up failed sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.284151 containerd[2100]: time="2025-01-13T21:32:56.283069691Z" level=error msg="Failed to destroy network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.284419 containerd[2100]: time="2025-01-13T21:32:56.284372883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-pqxl5,Uid:7531431e-bdd0-4c4b-b0d9-91a26f9acf4a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.285854 containerd[2100]: time="2025-01-13T21:32:56.285420391Z" level=error msg="encountered an error cleaning up failed sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.285854 containerd[2100]: time="2025-01-13T21:32:56.285513482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qn2h7,Uid:1da65a44-04e3-44d6-8959-9a867b5fe933,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.289601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63-shm.mount: Deactivated successfully. 
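Every sandbox failure in this stretch reports the same root cause from the Calico CNI plugin: /var/lib/calico/nodename does not exist yet, because the calico-node container has not finished initializing and writing it. A small sketch of the readiness gate implied by that error text, assuming only what the message itself states (this mirrors the log wording, not Calico's actual source):

```python
#!/usr/bin/env python3
# Readiness gate implied by the error: the CNI plugin stats
# /var/lib/calico/nodename before wiring up any pod sandbox.
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

def ensure_calico_ready(path: str = NODENAME_FILE) -> str:
    if not os.path.isfile(path):
        raise RuntimeError(
            f"stat {path}: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )
    with open(path) as f:
        return f.read().strip()  # the node name calico/node recorded for this host

if __name__ == "__main__":
    try:
        print("calico node name:", ensure_calico_ready())
    except RuntimeError as err:
        sys.exit(str(err))
```

Until calico-node writes that file, each RunPodSandbox attempt for the coredns, calico-apiserver, and calico-kube-controllers pods fails the same way, which is why the kubelet keeps emitting CreatePodSandboxError for them below.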
Jan 13 21:32:56.308911 kubelet[3391]: E0113 21:32:56.308870 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.309450 kubelet[3391]: E0113 21:32:56.308998 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.323874 kubelet[3391]: E0113 21:32:56.323822 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qn2h7" Jan 13 21:32:56.323874 kubelet[3391]: E0113 21:32:56.323889 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qn2h7" Jan 13 21:32:56.324119 kubelet[3391]: E0113 21:32:56.323971 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qn2h7_kube-system(1da65a44-04e3-44d6-8959-9a867b5fe933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qn2h7_kube-system(1da65a44-04e3-44d6-8959-9a867b5fe933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qn2h7" podUID="1da65a44-04e3-44d6-8959-9a867b5fe933" Jan 13 21:32:56.324119 kubelet[3391]: E0113 21:32:56.324027 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" Jan 13 21:32:56.324119 kubelet[3391]: E0113 21:32:56.324050 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" Jan 13 21:32:56.324316 kubelet[3391]: E0113 21:32:56.324092 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ff6c558-pqxl5_calico-apiserver(7531431e-bdd0-4c4b-b0d9-91a26f9acf4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ff6c558-pqxl5_calico-apiserver(7531431e-bdd0-4c4b-b0d9-91a26f9acf4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" podUID="7531431e-bdd0-4c4b-b0d9-91a26f9acf4a" Jan 13 21:32:56.343575 containerd[2100]: time="2025-01-13T21:32:56.343411861Z" level=error msg="Failed to destroy network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.349195 containerd[2100]: time="2025-01-13T21:32:56.349144282Z" level=error msg="Failed to destroy network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.351850 containerd[2100]: time="2025-01-13T21:32:56.350189093Z" level=error msg="encountered an error cleaning up failed sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.352105 containerd[2100]: time="2025-01-13T21:32:56.352067298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xh79p,Uid:5d39a778-23bc-4ff9-9d67-cbce50e1aa94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.352474 kubelet[3391]: E0113 21:32:56.352449 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.352730 kubelet[3391]: E0113 21:32:56.352717 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xh79p" Jan 13 21:32:56.353011 kubelet[3391]: E0113 21:32:56.352987 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xh79p" Jan 13 21:32:56.354747 kubelet[3391]: E0113 21:32:56.353388 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xh79p_kube-system(5d39a778-23bc-4ff9-9d67-cbce50e1aa94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xh79p_kube-system(5d39a778-23bc-4ff9-9d67-cbce50e1aa94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xh79p" podUID="5d39a778-23bc-4ff9-9d67-cbce50e1aa94" Jan 13 21:32:56.354549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd-shm.mount: Deactivated successfully. Jan 13 21:32:56.357474 containerd[2100]: time="2025-01-13T21:32:56.356511907Z" level=error msg="encountered an error cleaning up failed sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.357851 containerd[2100]: time="2025-01-13T21:32:56.357791015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-5ns5g,Uid:3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.362403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465-shm.mount: Deactivated successfully. 
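The burst of sandbox failures above all trace back to a single precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and every sandbox add or delete fails until that file exists (the error text itself says as much). Below is a minimal illustrative sketch in Go of that readiness check; it is not Calico's actual source, only a mirror of the failure mode logged here.

package main

import (
	"fmt"
	"os"
)

// Path taken verbatim from the error messages above; calico/node writes this
// file at startup, and the CNI plugin refuses to proceed while it is missing.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the logged failure: "stat /var/lib/calico/nodename: no such file or directory".
		fmt.Printf("calico CNI not ready: %v\n", err)
		os.Exit(1)
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("could not read node name: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico node ready on %q\n", string(name))
}

Once calico/node has started and written this file (as happens later in the log), the kubelet's retries of the same sandboxes begin to succeed.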
Jan 13 21:32:56.362707 kubelet[3391]: E0113 21:32:56.362678 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.362937 kubelet[3391]: E0113 21:32:56.362924 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" Jan 13 21:32:56.363061 kubelet[3391]: E0113 21:32:56.363051 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" Jan 13 21:32:56.363553 kubelet[3391]: E0113 21:32:56.363533 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ff6c558-5ns5g_calico-apiserver(3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ff6c558-5ns5g_calico-apiserver(3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" podUID="3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1" Jan 13 21:32:56.367864 containerd[2100]: time="2025-01-13T21:32:56.367804930Z" level=error msg="Failed to destroy network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.368434 containerd[2100]: time="2025-01-13T21:32:56.368399348Z" level=error msg="Failed to destroy network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.368654 containerd[2100]: time="2025-01-13T21:32:56.368618580Z" level=error msg="encountered an error cleaning up failed sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.368731 
containerd[2100]: time="2025-01-13T21:32:56.368704448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7j9j,Uid:1349c369-e827-4f6c-bda4-a032fbaa74c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.369063 kubelet[3391]: E0113 21:32:56.369025 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.369225 kubelet[3391]: E0113 21:32:56.369088 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:56.369225 kubelet[3391]: E0113 21:32:56.369117 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m7j9j" Jan 13 21:32:56.369345 kubelet[3391]: E0113 21:32:56.369263 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m7j9j_calico-system(1349c369-e827-4f6c-bda4-a032fbaa74c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m7j9j_calico-system(1349c369-e827-4f6c-bda4-a032fbaa74c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:56.369442 containerd[2100]: time="2025-01-13T21:32:56.369344982Z" level=error msg="encountered an error cleaning up failed sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.369442 containerd[2100]: time="2025-01-13T21:32:56.369413647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97574c6fb-sdstw,Uid:6516d32b-0c84-4b53-a73d-5859b4a02633,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.369701 kubelet[3391]: E0113 21:32:56.369680 3391 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.369794 kubelet[3391]: E0113 21:32:56.369781 3391 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" Jan 13 21:32:56.369853 kubelet[3391]: E0113 21:32:56.369812 3391 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" Jan 13 21:32:56.370036 kubelet[3391]: E0113 21:32:56.369982 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-97574c6fb-sdstw_calico-system(6516d32b-0c84-4b53-a73d-5859b4a02633)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-97574c6fb-sdstw_calico-system(6516d32b-0c84-4b53-a73d-5859b4a02633)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" podUID="6516d32b-0c84-4b53-a73d-5859b4a02633" Jan 13 21:32:56.411966 kubelet[3391]: I0113 21:32:56.411892 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:32:56.416372 kubelet[3391]: I0113 21:32:56.414663 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:32:56.439939 kubelet[3391]: I0113 21:32:56.439553 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:32:56.441227 kubelet[3391]: I0113 21:32:56.441185 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:32:56.442635 kubelet[3391]: I0113 21:32:56.442519 3391 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:32:56.444138 kubelet[3391]: I0113 21:32:56.443644 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:32:56.495456 containerd[2100]: time="2025-01-13T21:32:56.495122651Z" level=info msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" Jan 13 21:32:56.496860 containerd[2100]: time="2025-01-13T21:32:56.496309397Z" level=info msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" Jan 13 21:32:56.497384 containerd[2100]: time="2025-01-13T21:32:56.497344775Z" level=info msg="Ensure that sandbox 7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63 in task-service has been cleanup successfully" Jan 13 21:32:56.497857 containerd[2100]: time="2025-01-13T21:32:56.497657999Z" level=info msg="Ensure that sandbox 50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8 in task-service has been cleanup successfully" Jan 13 21:32:56.499474 containerd[2100]: time="2025-01-13T21:32:56.499446932Z" level=info msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" Jan 13 21:32:56.499878 containerd[2100]: time="2025-01-13T21:32:56.499839324Z" level=info msg="Ensure that sandbox aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2 in task-service has been cleanup successfully" Jan 13 21:32:56.500351 containerd[2100]: time="2025-01-13T21:32:56.500051685Z" level=info msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" Jan 13 21:32:56.500351 containerd[2100]: time="2025-01-13T21:32:56.500220132Z" level=info msg="Ensure that sandbox 06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd in task-service has been cleanup successfully" Jan 13 21:32:56.503704 containerd[2100]: time="2025-01-13T21:32:56.503672979Z" level=info msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" Jan 13 21:32:56.504046 containerd[2100]: time="2025-01-13T21:32:56.504020206Z" level=info msg="Ensure that sandbox 0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465 in task-service has been cleanup successfully" Jan 13 21:32:56.505982 containerd[2100]: time="2025-01-13T21:32:56.505952259Z" level=info msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" Jan 13 21:32:56.506304 containerd[2100]: time="2025-01-13T21:32:56.506279052Z" level=info msg="Ensure that sandbox e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f in task-service has been cleanup successfully" Jan 13 21:32:56.646674 containerd[2100]: time="2025-01-13T21:32:56.646504135Z" level=error msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" failed" error="failed to destroy network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.648803 kubelet[3391]: E0113 21:32:56.648287 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:32:56.653013 containerd[2100]: time="2025-01-13T21:32:56.652468447Z" level=error msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" failed" error="failed to destroy network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.653493 kubelet[3391]: E0113 21:32:56.653441 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:32:56.670302 kubelet[3391]: E0113 21:32:56.669949 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8"} Jan 13 21:32:56.670656 kubelet[3391]: E0113 21:32:56.670519 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63"} Jan 13 21:32:56.671015 kubelet[3391]: E0113 21:32:56.670873 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1da65a44-04e3-44d6-8959-9a867b5fe933\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.671015 kubelet[3391]: E0113 21:32:56.670987 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1da65a44-04e3-44d6-8959-9a867b5fe933\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qn2h7" podUID="1da65a44-04e3-44d6-8959-9a867b5fe933" Jan 13 21:32:56.671371 kubelet[3391]: E0113 21:32:56.671239 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.671371 kubelet[3391]: E0113 
21:32:56.671349 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" podUID="7531431e-bdd0-4c4b-b0d9-91a26f9acf4a" Jan 13 21:32:56.686116 containerd[2100]: time="2025-01-13T21:32:56.686063602Z" level=error msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" failed" error="failed to destroy network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.686650 kubelet[3391]: E0113 21:32:56.686364 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:32:56.686650 kubelet[3391]: E0113 21:32:56.686414 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd"} Jan 13 21:32:56.686650 kubelet[3391]: E0113 21:32:56.686461 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d39a778-23bc-4ff9-9d67-cbce50e1aa94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.686650 kubelet[3391]: E0113 21:32:56.686500 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d39a778-23bc-4ff9-9d67-cbce50e1aa94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xh79p" podUID="5d39a778-23bc-4ff9-9d67-cbce50e1aa94" Jan 13 21:32:56.699536 containerd[2100]: time="2025-01-13T21:32:56.698464893Z" level=error msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" failed" error="failed to destroy network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 21:32:56.699536 containerd[2100]: time="2025-01-13T21:32:56.698862441Z" level=error msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" failed" error="failed to destroy network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.699751 kubelet[3391]: E0113 21:32:56.699143 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:32:56.699751 kubelet[3391]: E0113 21:32:56.699202 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465"} Jan 13 21:32:56.699751 kubelet[3391]: E0113 21:32:56.699250 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.699751 kubelet[3391]: E0113 21:32:56.699287 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" podUID="3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1" Jan 13 21:32:56.700330 containerd[2100]: time="2025-01-13T21:32:56.699657711Z" level=error msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" failed" error="failed to destroy network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 21:32:56.700373 kubelet[3391]: E0113 21:32:56.700147 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:32:56.700373 kubelet[3391]: 
E0113 21:32:56.700187 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2"} Jan 13 21:32:56.700373 kubelet[3391]: E0113 21:32:56.700233 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6516d32b-0c84-4b53-a73d-5859b4a02633\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.700373 kubelet[3391]: E0113 21:32:56.700270 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6516d32b-0c84-4b53-a73d-5859b4a02633\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" podUID="6516d32b-0c84-4b53-a73d-5859b4a02633" Jan 13 21:32:56.700616 kubelet[3391]: E0113 21:32:56.700589 3391 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:32:56.700766 kubelet[3391]: E0113 21:32:56.700624 3391 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f"} Jan 13 21:32:56.700817 kubelet[3391]: E0113 21:32:56.700774 3391 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1349c369-e827-4f6c-bda4-a032fbaa74c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 13 21:32:56.701238 kubelet[3391]: E0113 21:32:56.700860 3391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1349c369-e827-4f6c-bda4-a032fbaa74c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m7j9j" podUID="1349c369-e827-4f6c-bda4-a032fbaa74c0" Jan 13 21:32:57.203192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f-shm.mount: Deactivated 
successfully. Jan 13 21:32:57.203540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2-shm.mount: Deactivated successfully. Jan 13 21:32:58.324928 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:32:58.329403 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:32:58.325018 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:00.376416 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:00.373139 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:00.373170 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:04.345165 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:04.342536 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:04.342544 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:05.070111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744673136.mount: Deactivated successfully. Jan 13 21:33:05.092485 systemd[1]: Started sshd@7-172.31.23.216:22-147.75.109.163:41824.service - OpenSSH per-connection server daemon (147.75.109.163:41824). Jan 13 21:33:05.284775 containerd[2100]: time="2025-01-13T21:33:05.284593915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 21:33:05.297419 containerd[2100]: time="2025-01-13T21:33:05.295882951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:05.363315 containerd[2100]: time="2025-01-13T21:33:05.361923358Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:05.364878 containerd[2100]: time="2025-01-13T21:33:05.364782964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:05.374135 containerd[2100]: time="2025-01-13T21:33:05.374077990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.963011621s" Jan 13 21:33:05.374135 containerd[2100]: time="2025-01-13T21:33:05.374127075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 21:33:05.471318 sshd[4721]: Accepted publickey for core from 147.75.109.163 port 41824 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:05.474734 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:05.511772 systemd-logind[2056]: New session 8 of user core. Jan 13 21:33:05.529595 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 21:33:05.548609 containerd[2100]: time="2025-01-13T21:33:05.548450660Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:33:05.660326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278000145.mount: Deactivated successfully. Jan 13 21:33:05.703335 containerd[2100]: time="2025-01-13T21:33:05.702698259Z" level=info msg="CreateContainer within sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\"" Jan 13 21:33:05.707365 containerd[2100]: time="2025-01-13T21:33:05.707146403Z" level=info msg="StartContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\"" Jan 13 21:33:05.954116 sshd[4721]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:05.962010 systemd[1]: sshd@7-172.31.23.216:22-147.75.109.163:41824.service: Deactivated successfully. Jan 13 21:33:05.977060 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:33:05.979223 systemd-logind[2056]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:33:05.980746 systemd-logind[2056]: Removed session 8. Jan 13 21:33:06.179382 systemd[1]: run-containerd-runc-k8s.io-21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8-runc.Pckthz.mount: Deactivated successfully. Jan 13 21:33:06.284754 containerd[2100]: time="2025-01-13T21:33:06.282447372Z" level=info msg="StartContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" returns successfully" Jan 13 21:33:06.395999 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:06.396287 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:06.396299 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:06.647852 kubelet[3391]: I0113 21:33:06.647675 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2hbnf" podStartSLOduration=2.916348229 podStartE2EDuration="23.566554056s" podCreationTimestamp="2025-01-13 21:32:43 +0000 UTC" firstStartedPulling="2025-01-13 21:32:44.724752714 +0000 UTC m=+26.193104560" lastFinishedPulling="2025-01-13 21:33:05.374958557 +0000 UTC m=+46.843310387" observedRunningTime="2025-01-13 21:33:06.566062146 +0000 UTC m=+48.034414001" watchObservedRunningTime="2025-01-13 21:33:06.566554056 +0000 UTC m=+48.034905909" Jan 13 21:33:06.905302 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 21:33:06.905679 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 21:33:09.076912 containerd[2100]: time="2025-01-13T21:33:09.076040107Z" level=info msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" Jan 13 21:33:09.082405 containerd[2100]: time="2025-01-13T21:33:09.081231342Z" level=info msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" Jan 13 21:33:09.347856 kernel: bpftool[5031]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 21:33:09.722342 (udev-worker)[4796]: Network interface NamePolicy= disabled on kernel command line. 
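The startup-latency figures above are internally consistent, and match kubelet subtracting the image-pull window from the end-to-end time. podStartE2EDuration is the interval from podCreationTimestamp (21:32:43) to the watch-observed running time (21:33:06.566554056), i.e. 23.566554056 s. The image-pull window is lastFinishedPulling minus firstStartedPulling, 46.843310387 s minus 26.193104560 s = 20.650205827 s (a window covering all of the pod's image pulls, ending with the calico/node:v3.29.1 pull that completed at 21:33:05.374). Subtracting gives 23.566554056 s minus 20.650205827 s = 2.916348229 s, exactly the podStartSLOduration kubelet logs.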
Jan 13 21:33:09.735453 systemd-networkd[1652]: vxlan.calico: Link UP Jan 13 21:33:09.735459 systemd-networkd[1652]: vxlan.calico: Gained carrier Jan 13 21:33:09.783197 (udev-worker)[4795]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:09.794059 (udev-worker)[5074]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.329 [INFO][4995] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.331 [INFO][4995] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" iface="eth0" netns="/var/run/netns/cni-b1e6ff6a-5a06-373e-f23f-c5860801276b" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.333 [INFO][4995] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" iface="eth0" netns="/var/run/netns/cni-b1e6ff6a-5a06-373e-f23f-c5860801276b" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.336 [INFO][4995] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" iface="eth0" netns="/var/run/netns/cni-b1e6ff6a-5a06-373e-f23f-c5860801276b" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.337 [INFO][4995] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.337 [INFO][4995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.813 [INFO][5026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.820 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.821 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.850 [WARNING][5026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.850 [INFO][5026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.858 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:33:09.897850 containerd[2100]: 2025-01-13 21:33:09.884 [INFO][4995] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:09.903995 systemd[1]: run-netns-cni\x2db1e6ff6a\x2d5a06\x2d373e\x2df23f\x2dc5860801276b.mount: Deactivated successfully. Jan 13 21:33:09.935138 containerd[2100]: time="2025-01-13T21:33:09.934861904Z" level=info msg="TearDown network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" successfully" Jan 13 21:33:09.935138 containerd[2100]: time="2025-01-13T21:33:09.934956123Z" level=info msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" returns successfully" Jan 13 21:33:09.943931 containerd[2100]: time="2025-01-13T21:33:09.943879928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xh79p,Uid:5d39a778-23bc-4ff9-9d67-cbce50e1aa94,Namespace:kube-system,Attempt:1,}" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.330 [INFO][4988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.333 [INFO][4988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" iface="eth0" netns="/var/run/netns/cni-56e2554c-f222-2c7e-a77b-26133d9a54a3" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.334 [INFO][4988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" iface="eth0" netns="/var/run/netns/cni-56e2554c-f222-2c7e-a77b-26133d9a54a3" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.337 [INFO][4988] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" iface="eth0" netns="/var/run/netns/cni-56e2554c-f222-2c7e-a77b-26133d9a54a3" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.337 [INFO][4988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.337 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.820 [INFO][5027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.830 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.858 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.872 [WARNING][5027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.872 [INFO][5027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.881 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:09.960172 containerd[2100]: 2025-01-13 21:33:09.938 [INFO][4988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:09.960172 containerd[2100]: time="2025-01-13T21:33:09.959716280Z" level=info msg="TearDown network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" successfully" Jan 13 21:33:09.960172 containerd[2100]: time="2025-01-13T21:33:09.959796582Z" level=info msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" returns successfully" Jan 13 21:33:09.970054 containerd[2100]: time="2025-01-13T21:33:09.962821738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qn2h7,Uid:1da65a44-04e3-44d6-8959-9a867b5fe933,Namespace:kube-system,Attempt:1,}" Jan 13 21:33:09.972406 systemd[1]: run-netns-cni\x2d56e2554c\x2df222\x2d2c7e\x2da77b\x2d26133d9a54a3.mount: Deactivated successfully. Jan 13 21:33:10.086400 containerd[2100]: time="2025-01-13T21:33:10.086293292Z" level=info msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" Jan 13 21:33:10.089133 containerd[2100]: time="2025-01-13T21:33:10.089095732Z" level=info msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" Jan 13 21:33:10.666605 systemd-networkd[1652]: cali72875e634b0: Link UP Jan 13 21:33:10.666951 systemd-networkd[1652]: cali72875e634b0: Gained carrier Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.253 [INFO][5084] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0 coredns-76f75df574- kube-system 1da65a44-04e3-44d6-8959-9a867b5fe933 875 0 2025-01-13 21:32:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-216 coredns-76f75df574-qn2h7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72875e634b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.260 [INFO][5084] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 
21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.469 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" HandleID="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.507 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" HandleID="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039ca40), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-216", "pod":"coredns-76f75df574-qn2h7", "timestamp":"2025-01-13 21:33:10.469469858 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.507 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.507 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.507 [INFO][5139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.520 [INFO][5139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.546 [INFO][5139] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.576 [INFO][5139] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.580 [INFO][5139] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.589 [INFO][5139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.590 [INFO][5139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.595 [INFO][5139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.614 [INFO][5139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.635 [INFO][5139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 
handle="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.636 [INFO][5139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" host="ip-172-31-23-216" Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.637 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:10.707296 containerd[2100]: 2025-01-13 21:33:10.638 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" HandleID="k8s-pod-network.6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.646 [INFO][5084] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1da65a44-04e3-44d6-8959-9a867b5fe933", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"coredns-76f75df574-qn2h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72875e634b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.646 [INFO][5084] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.1/32] ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.647 [INFO][5084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72875e634b0 
ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.671 [INFO][5084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.673 [INFO][5084] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1da65a44-04e3-44d6-8959-9a867b5fe933", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb", Pod:"coredns-76f75df574-qn2h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72875e634b0", MAC:"6a:52:60:48:88:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:10.710275 containerd[2100]: 2025-01-13 21:33:10.698 [INFO][5084] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb" Namespace="kube-system" Pod="coredns-76f75df574-qn2h7" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.442 [INFO][5126] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.443 [INFO][5126] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" iface="eth0" netns="/var/run/netns/cni-5e33fb3b-d17f-2ff1-9ce4-1ab1843fb97e" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.444 [INFO][5126] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" iface="eth0" netns="/var/run/netns/cni-5e33fb3b-d17f-2ff1-9ce4-1ab1843fb97e" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.444 [INFO][5126] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" iface="eth0" netns="/var/run/netns/cni-5e33fb3b-d17f-2ff1-9ce4-1ab1843fb97e" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.444 [INFO][5126] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.444 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.639 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.642 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.642 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.656 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.656 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.660 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:10.728446 containerd[2100]: 2025-01-13 21:33:10.693 [INFO][5126] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:10.729304 containerd[2100]: time="2025-01-13T21:33:10.729025834Z" level=info msg="TearDown network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" successfully" Jan 13 21:33:10.729304 containerd[2100]: time="2025-01-13T21:33:10.729059713Z" level=info msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" returns successfully" Jan 13 21:33:10.731547 containerd[2100]: time="2025-01-13T21:33:10.730436681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7j9j,Uid:1349c369-e827-4f6c-bda4-a032fbaa74c0,Namespace:calico-system,Attempt:1,}" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.485 [INFO][5127] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.487 [INFO][5127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" iface="eth0" netns="/var/run/netns/cni-e4d9a73b-8166-07d0-2985-9d8b3e8d480c" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.489 [INFO][5127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" iface="eth0" netns="/var/run/netns/cni-e4d9a73b-8166-07d0-2985-9d8b3e8d480c" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.490 [INFO][5127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" iface="eth0" netns="/var/run/netns/cni-e4d9a73b-8166-07d0-2985-9d8b3e8d480c" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.490 [INFO][5127] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.490 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.654 [INFO][5171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.655 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.660 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.700 [WARNING][5171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.700 [INFO][5171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.707 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:10.801570 containerd[2100]: 2025-01-13 21:33:10.738 [INFO][5127] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:10.809010 containerd[2100]: time="2025-01-13T21:33:10.808809910Z" level=info msg="TearDown network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" successfully" Jan 13 21:33:10.809313 containerd[2100]: time="2025-01-13T21:33:10.809283917Z" level=info msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" returns successfully" Jan 13 21:33:10.813204 systemd-networkd[1652]: cali4097fc172ab: Link UP Jan 13 21:33:10.814227 containerd[2100]: time="2025-01-13T21:33:10.814169766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-pqxl5,Uid:7531431e-bdd0-4c4b-b0d9-91a26f9acf4a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:33:10.814884 systemd-networkd[1652]: cali4097fc172ab: Gained carrier Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.330 [INFO][5081] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0 coredns-76f75df574- kube-system 5d39a778-23bc-4ff9-9d67-cbce50e1aa94 874 0 2025-01-13 21:32:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-216 coredns-76f75df574-xh79p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4097fc172ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.331 [INFO][5081] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.648 [INFO][5158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" HandleID="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.701 [INFO][5158] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" HandleID="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000261aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-216", "pod":"coredns-76f75df574-xh79p", "timestamp":"2025-01-13 21:33:10.648127198 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.705 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.709 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.709 [INFO][5158] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.722 [INFO][5158] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.744 [INFO][5158] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.756 [INFO][5158] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.762 [INFO][5158] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.768 [INFO][5158] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.768 [INFO][5158] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.775 [INFO][5158] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.787 [INFO][5158] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.797 [INFO][5158] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 handle="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.797 [INFO][5158] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" host="ip-172-31-23-216" Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.797 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
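The entries above trace Calico's host-affine IPAM flow for coredns-76f75df574-xh79p: take the host-wide IPAM lock, confirm the node's affinity for block 192.168.40.0/26, claim the next free address from that block (192.168.40.2 here), write the block back, and release the lock. A minimal Go sketch of that lock/load/claim/write pattern follows; it is an illustration only, not Calico's libcalico-go code, and every name in it is hypothetical.

    package main

    // Illustrative model of the IPAM flow in the log above: acquire the
    // host-wide lock, scan the host-affine block, claim the first free
    // address, release the lock. Not Calico's actual implementation.

    import (
        "fmt"
        "net"
        "sync"
    )

    type block struct {
        cidr      *net.IPNet      // host-affine block, e.g. 192.168.40.0/26
        allocated map[string]bool // addresses already claimed from the block
    }

    var hostIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock"

    func nextIP(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func autoAssign(b *block, handleID string) (net.IP, error) {
        hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."
        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
            if !b.allocated[ip.String()] {
                b.allocated[ip.String()] = true // "Writing block in order to claim IPs"
                return ip, nil
            }
        }
        return nil, fmt.Errorf("no free addresses in %s for %s", b.cidr, handleID)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.40.0/26")
        b := &block{
            cidr: cidr,
            // Pretend .0 is reserved and .1 already went to coredns-76f75df574-qn2h7.
            allocated: map[string]bool{"192.168.40.0": true, "192.168.40.1": true},
        }
        ip, err := autoAssign(b, "example-handle") // hypothetical handle ID
        fmt.Println(ip, err)                       // 192.168.40.2 <nil>
    }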
Jan 13 21:33:10.857559 containerd[2100]: 2025-01-13 21:33:10.797 [INFO][5158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" HandleID="k8s-pod-network.4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.801 [INFO][5081] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5d39a778-23bc-4ff9-9d67-cbce50e1aa94", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"coredns-76f75df574-xh79p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4097fc172ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.803 [INFO][5081] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.2/32] ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.805 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4097fc172ab ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.814 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" 
WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.817 [INFO][5081] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5d39a778-23bc-4ff9-9d67-cbce50e1aa94", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d", Pod:"coredns-76f75df574-xh79p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4097fc172ab", MAC:"ca:cb:29:25:4f:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:10.858530 containerd[2100]: 2025-01-13 21:33:10.845 [INFO][5081] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d" Namespace="kube-system" Pod="coredns-76f75df574-xh79p" WorkloadEndpoint="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:10.869072 systemd-networkd[1652]: vxlan.calico: Gained IPv6LL Jan 13 21:33:10.924983 systemd[1]: run-netns-cni\x2d5e33fb3b\x2dd17f\x2d2ff1\x2d9ce4\x2d1ab1843fb97e.mount: Deactivated successfully. Jan 13 21:33:10.925187 systemd[1]: run-netns-cni\x2de4d9a73b\x2d8166\x2d07d0\x2d2985\x2d9d8b3e8d480c.mount: Deactivated successfully. Jan 13 21:33:10.992057 systemd[1]: Started sshd@8-172.31.23.216:22-147.75.109.163:60734.service - OpenSSH per-connection server daemon (147.75.109.163:60734). 
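One note on reading the WorkloadEndpoint dumps above: the port numbers are printed in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the CoreDNS metrics port). A one-line check of the conversion:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // prints: 53 9153
    }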
Jan 13 21:33:11.076988 containerd[2100]: time="2025-01-13T21:33:11.076607159Z" level=info msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" Jan 13 21:33:11.077270 containerd[2100]: time="2025-01-13T21:33:11.077234391Z" level=info msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" Jan 13 21:33:11.113627 containerd[2100]: time="2025-01-13T21:33:11.106954848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:11.113627 containerd[2100]: time="2025-01-13T21:33:11.107047179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:11.113627 containerd[2100]: time="2025-01-13T21:33:11.107069744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:11.113627 containerd[2100]: time="2025-01-13T21:33:11.107197447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:11.156350 containerd[2100]: time="2025-01-13T21:33:11.155868176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:11.156350 containerd[2100]: time="2025-01-13T21:33:11.155994160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:11.156350 containerd[2100]: time="2025-01-13T21:33:11.156013420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:11.156350 containerd[2100]: time="2025-01-13T21:33:11.156173089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:11.292045 sshd[5252]: Accepted publickey for core from 147.75.109.163 port 60734 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:11.297045 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:11.323905 systemd-logind[2056]: New session 9 of user core. Jan 13 21:33:11.323961 systemd[1]: Started session-9.scope - Session 9 of User core. 
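The run-netns-cni\x2d... mount units deactivated just above are systemd's escaped names for the CNI network-namespace mounts torn down a moment earlier: systemd writes '-' as \x2d in unit names (and, in the full escaping scheme, a bare '-' stands for '/'), so the first unit corresponds to the netns path /var/run/netns/cni-5e33fb3b-d17f-2ff1-9ce4-1ab1843fb97e seen in the teardown entries. Undoing just the \x2d escaping is a plain substitution:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Unit name as logged; only the \x2d escapes are decoded here.
        unit := `run-netns-cni\x2d5e33fb3b\x2dd17f\x2d2ff1\x2d9ce4\x2d1ab1843fb97e.mount`
        fmt.Println(strings.ReplaceAll(unit, `\x2d`, "-"))
        // run-netns-cni-5e33fb3b-d17f-2ff1-9ce4-1ab1843fb97e.mount
    }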
Jan 13 21:33:11.645017 containerd[2100]: time="2025-01-13T21:33:11.644538324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xh79p,Uid:5d39a778-23bc-4ff9-9d67-cbce50e1aa94,Namespace:kube-system,Attempt:1,} returns sandbox id \"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d\"" Jan 13 21:33:11.690872 containerd[2100]: time="2025-01-13T21:33:11.690330297Z" level=info msg="CreateContainer within sandbox \"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:11.797762 systemd-networkd[1652]: calic312bce2af0: Link UP Jan 13 21:33:11.798064 systemd-networkd[1652]: calic312bce2af0: Gained carrier Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.177 [INFO][5234] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0 calico-apiserver-7f9ff6c558- calico-apiserver 7531431e-bdd0-4c4b-b0d9-91a26f9acf4a 886 0 2025-01-13 21:32:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f9ff6c558 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-216 calico-apiserver-7f9ff6c558-pqxl5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic312bce2af0 [] []}} ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.177 [INFO][5234] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.434 [INFO][5332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" HandleID="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.564 [INFO][5332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" HandleID="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b8660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-216", "pod":"calico-apiserver-7f9ff6c558-pqxl5", "timestamp":"2025-01-13 21:33:11.433439394 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.564 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.564 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.564 [INFO][5332] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.576 [INFO][5332] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.601 [INFO][5332] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.614 [INFO][5332] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.626 [INFO][5332] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.636 [INFO][5332] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.636 [INFO][5332] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.665 [INFO][5332] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.694 [INFO][5332] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.730 [INFO][5332] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.732 [INFO][5332] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" host="ip-172-31-23-216" Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.733 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
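At this point the node's affine block 192.168.40.0/26 has handed out 192.168.40.1, .2 and now .3, one /32 per pod, in strict sequence under the same host-wide lock. The /26 spans 64 addresses in total; a quick sanity check that the assignments so far all land inside the block:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, _ := net.ParseCIDR("192.168.40.0/26")
        ones, bits := block.Mask.Size()
        fmt.Println("addresses in block:", 1<<(bits-ones)) // 64
        for _, a := range []string{"192.168.40.1", "192.168.40.2", "192.168.40.3"} {
            fmt.Println(a, "in block:", block.Contains(net.ParseIP(a))) // true for all three
        }
    }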
Jan 13 21:33:11.856886 containerd[2100]: 2025-01-13 21:33:11.736 [INFO][5332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" HandleID="k8s-pod-network.069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.777 [INFO][5234] cni-plugin/k8s.go 386: Populated endpoint ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"calico-apiserver-7f9ff6c558-pqxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic312bce2af0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.777 [INFO][5234] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.3/32] ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.777 [INFO][5234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic312bce2af0 ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.803 [INFO][5234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.805 [INFO][5234] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d", Pod:"calico-apiserver-7f9ff6c558-pqxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic312bce2af0", MAC:"6e:67:9c:58:5f:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:11.859529 containerd[2100]: 2025-01-13 21:33:11.845 [INFO][5234] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-pqxl5" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:11.859529 containerd[2100]: time="2025-01-13T21:33:11.859387602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qn2h7,Uid:1da65a44-04e3-44d6-8959-9a867b5fe933,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb\"" Jan 13 21:33:11.865104 containerd[2100]: time="2025-01-13T21:33:11.864970993Z" level=info msg="CreateContainer within sandbox \"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35af3000484221f7857ff3fe88b3148dfe2d1ead404b5014d07e56ab090fbd64\"" Jan 13 21:33:11.876858 containerd[2100]: time="2025-01-13T21:33:11.876335578Z" level=info msg="StartContainer for \"35af3000484221f7857ff3fe88b3148dfe2d1ead404b5014d07e56ab090fbd64\"" Jan 13 21:33:11.892923 containerd[2100]: time="2025-01-13T21:33:11.892525376Z" level=info msg="CreateContainer within sandbox \"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:33:11.991916 systemd-networkd[1652]: cali52c7d2c0e73: Link UP Jan 13 21:33:11.995575 systemd-networkd[1652]: cali52c7d2c0e73: Gained carrier Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.507 [INFO][5321] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.507 [INFO][5321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" iface="eth0" netns="/var/run/netns/cni-b6a79ac3-11a5-44a8-abc8-5563a8774320" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.509 [INFO][5321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" iface="eth0" netns="/var/run/netns/cni-b6a79ac3-11a5-44a8-abc8-5563a8774320" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.531 [INFO][5321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" iface="eth0" netns="/var/run/netns/cni-b6a79ac3-11a5-44a8-abc8-5563a8774320" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.532 [INFO][5321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.533 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.720 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.720 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.901 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.939 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.939 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:11.954 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:12.042549 containerd[2100]: 2025-01-13 21:33:12.002 [INFO][5321] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:12.055862 containerd[2100]: time="2025-01-13T21:33:12.054713603Z" level=info msg="TearDown network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" successfully" Jan 13 21:33:12.055862 containerd[2100]: time="2025-01-13T21:33:12.054759147Z" level=info msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" returns successfully" Jan 13 21:33:12.055862 containerd[2100]: time="2025-01-13T21:33:12.055819647Z" level=info msg="CreateContainer within sandbox \"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"070b396de7c61abbecd86b968c4cc091219e55e1a8522f762cc8fd7d46d673cd\"" Jan 13 21:33:12.055135 systemd[1]: run-netns-cni\x2db6a79ac3\x2d11a5\x2d44a8\x2dabc8\x2d5563a8774320.mount: Deactivated successfully. Jan 13 21:33:12.056471 containerd[2100]: time="2025-01-13T21:33:12.056092070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-5ns5g,Uid:3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1,Namespace:calico-apiserver,Attempt:1,}" Jan 13 21:33:12.071417 containerd[2100]: time="2025-01-13T21:33:12.067462332Z" level=info msg="StartContainer for \"070b396de7c61abbecd86b968c4cc091219e55e1a8522f762cc8fd7d46d673cd\"" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.288 [INFO][5312] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.301 [INFO][5312] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" iface="eth0" netns="/var/run/netns/cni-4a47025e-596f-e778-1a47-e2165a8d35eb" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.302 [INFO][5312] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" iface="eth0" netns="/var/run/netns/cni-4a47025e-596f-e778-1a47-e2165a8d35eb" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.302 [INFO][5312] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" iface="eth0" netns="/var/run/netns/cni-4a47025e-596f-e778-1a47-e2165a8d35eb" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.302 [INFO][5312] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.303 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.725 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.726 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
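The teardown sequences above (sandboxes e720c157..., 50ab2d57... and 0960b6c5...) all release addresses the same way: first by handle ID, and when the handle is already gone the plugin logs the WARNING "Asked to release address but it doesn't exist. Ignoring" and falls back to releasing by workload ID. A hedged sketch of that fallback follows, with a made-up in-memory datastore standing in for Calico's; only the control flow mirrors the log.

    package main

    import (
        "errors"
        "fmt"
    )

    var errHandleNotFound = errors.New("handle not found")

    // handles stands in for the IPAM datastore keyed by handle ID. It is empty
    // here, matching the log: the address was already gone at teardown time.
    var handles = map[string][]string{}

    func releaseByHandle(handleID string) ([]string, error) {
        addrs, ok := handles[handleID]
        if !ok {
            return nil, errHandleNotFound // logged as a WARNING, then ignored
        }
        delete(handles, handleID)
        return addrs, nil
    }

    func releaseByWorkload(workloadID string) {
        // The real plugin scans for allocations attributed to the workload here.
        fmt.Println("releasing any addresses attributed to", workloadID)
    }

    func main() {
        handle := "k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f"
        if _, err := releaseByHandle(handle); errors.Is(err, errHandleNotFound) {
            releaseByWorkload("ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0")
        }
    }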
Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.954 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.985 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.985 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:11.996 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:12.071417 containerd[2100]: 2025-01-13 21:33:12.043 [INFO][5312] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:12.076959 containerd[2100]: time="2025-01-13T21:33:12.071509921Z" level=info msg="TearDown network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" successfully" Jan 13 21:33:12.076959 containerd[2100]: time="2025-01-13T21:33:12.071651575Z" level=info msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" returns successfully" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.009 [INFO][5212] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0 csi-node-driver- calico-system 1349c369-e827-4f6c-bda4-a032fbaa74c0 885 0 2025-01-13 21:32:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-216 csi-node-driver-m7j9j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali52c7d2c0e73 [] []}} ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.009 [INFO][5212] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.410 [INFO][5265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" HandleID="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.575 [INFO][5265] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" HandleID="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051f10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-216", "pod":"csi-node-driver-m7j9j", "timestamp":"2025-01-13 21:33:11.410644977 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.584 [INFO][5265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.737 [INFO][5265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.737 [INFO][5265] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.745 [INFO][5265] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.782 [INFO][5265] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.799 [INFO][5265] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.811 [INFO][5265] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.834 [INFO][5265] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.834 [INFO][5265] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.851 [INFO][5265] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3 Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.867 [INFO][5265] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.897 [INFO][5265] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 handle="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.897 [INFO][5265] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" host="ip-172-31-23-216" Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.897 [INFO][5265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:33:12.076959 containerd[2100]: 2025-01-13 21:33:11.897 [INFO][5265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" HandleID="k8s-pod-network.4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:11.952 [INFO][5212] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1349c369-e827-4f6c-bda4-a032fbaa74c0", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"csi-node-driver-m7j9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52c7d2c0e73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:11.953 [INFO][5212] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.4/32] ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:11.953 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52c7d2c0e73 ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:11.996 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:12.010 [INFO][5212] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" 
Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1349c369-e827-4f6c-bda4-a032fbaa74c0", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3", Pod:"csi-node-driver-m7j9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52c7d2c0e73", MAC:"86:37:4a:fd:78:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.089151 containerd[2100]: 2025-01-13 21:33:12.052 [INFO][5212] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3" Namespace="calico-system" Pod="csi-node-driver-m7j9j" WorkloadEndpoint="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:12.081083 systemd[1]: run-netns-cni\x2d4a47025e\x2d596f\x2de778\x2d1a47\x2de2165a8d35eb.mount: Deactivated successfully. Jan 13 21:33:12.107410 containerd[2100]: time="2025-01-13T21:33:12.107365665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97574c6fb-sdstw,Uid:6516d32b-0c84-4b53-a73d-5859b4a02633,Namespace:calico-system,Attempt:1,}" Jan 13 21:33:12.214168 systemd-networkd[1652]: cali4097fc172ab: Gained IPv6LL Jan 13 21:33:12.252890 sshd[5252]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:12.277244 systemd-networkd[1652]: cali72875e634b0: Gained IPv6LL Jan 13 21:33:12.279475 systemd[1]: sshd@8-172.31.23.216:22-147.75.109.163:60734.service: Deactivated successfully. Jan 13 21:33:12.293027 containerd[2100]: time="2025-01-13T21:33:12.275014318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:12.293027 containerd[2100]: time="2025-01-13T21:33:12.275086195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:12.293027 containerd[2100]: time="2025-01-13T21:33:12.275107314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.293027 containerd[2100]: time="2025-01-13T21:33:12.275220453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.303176 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:33:12.307289 systemd-logind[2056]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:33:12.310514 systemd-logind[2056]: Removed session 9. Jan 13 21:33:12.341997 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:12.342028 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:12.342870 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:12.498185 containerd[2100]: time="2025-01-13T21:33:12.497415927Z" level=info msg="StartContainer for \"35af3000484221f7857ff3fe88b3148dfe2d1ead404b5014d07e56ab090fbd64\" returns successfully" Jan 13 21:33:12.509951 containerd[2100]: time="2025-01-13T21:33:12.507195965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:12.509951 containerd[2100]: time="2025-01-13T21:33:12.507282475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:12.509951 containerd[2100]: time="2025-01-13T21:33:12.507302392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.511892 containerd[2100]: time="2025-01-13T21:33:12.511696316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:12.517755 containerd[2100]: time="2025-01-13T21:33:12.517714848Z" level=info msg="StartContainer for \"070b396de7c61abbecd86b968c4cc091219e55e1a8522f762cc8fd7d46d673cd\" returns successfully" Jan 13 21:33:12.563147 containerd[2100]: time="2025-01-13T21:33:12.559193523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-pqxl5,Uid:7531431e-bdd0-4c4b-b0d9-91a26f9acf4a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d\"" Jan 13 21:33:12.564868 containerd[2100]: time="2025-01-13T21:33:12.564111483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:33:12.689258 containerd[2100]: time="2025-01-13T21:33:12.689198166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m7j9j,Uid:1349c369-e827-4f6c-bda4-a032fbaa74c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3\"" Jan 13 21:33:12.778801 systemd-networkd[1652]: calif1fabf08c78: Link UP Jan 13 21:33:12.782604 systemd-networkd[1652]: calif1fabf08c78: Gained carrier Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.462 [INFO][5484] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0 calico-kube-controllers-97574c6fb- calico-system 6516d32b-0c84-4b53-a73d-5859b4a02633 898 0 2025-01-13 21:32:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:97574c6fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-216 calico-kube-controllers-97574c6fb-sdstw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
calif1fabf08c78 [] []}} ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.463 [INFO][5484] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.627 [INFO][5583] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.662 [INFO][5583] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-216", "pod":"calico-kube-controllers-97574c6fb-sdstw", "timestamp":"2025-01-13 21:33:12.627788723 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.662 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.663 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.663 [INFO][5583] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.668 [INFO][5583] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.676 [INFO][5583] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.714 [INFO][5583] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.719 [INFO][5583] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.725 [INFO][5583] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.726 [INFO][5583] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.732 [INFO][5583] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7 Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.745 [INFO][5583] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.764 [INFO][5583] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 handle="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.765 [INFO][5583] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" host="ip-172-31-23-216" Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.765 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 21:33:12.856077 containerd[2100]: 2025-01-13 21:33:12.765 [INFO][5583] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.770 [INFO][5484] cni-plugin/k8s.go 386: Populated endpoint ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0", GenerateName:"calico-kube-controllers-97574c6fb-", Namespace:"calico-system", SelfLink:"", UID:"6516d32b-0c84-4b53-a73d-5859b4a02633", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"97574c6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"calico-kube-controllers-97574c6fb-sdstw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1fabf08c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.771 [INFO][5484] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.5/32] ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.771 [INFO][5484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1fabf08c78 ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.785 [INFO][5484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.786 [INFO][5484] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0", GenerateName:"calico-kube-controllers-97574c6fb-", Namespace:"calico-system", SelfLink:"", UID:"6516d32b-0c84-4b53-a73d-5859b4a02633", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"97574c6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7", Pod:"calico-kube-controllers-97574c6fb-sdstw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1fabf08c78", MAC:"7e:6a:53:5d:dc:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.858751 containerd[2100]: 2025-01-13 21:33:12.825 [INFO][5484] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Namespace="calico-system" Pod="calico-kube-controllers-97574c6fb-sdstw" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:12.874799 kubelet[3391]: I0113 21:33:12.874760 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xh79p" podStartSLOduration=39.874693842 podStartE2EDuration="39.874693842s" podCreationTimestamp="2025-01-13 21:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:12.871948285 +0000 UTC m=+54.340300138" watchObservedRunningTime="2025-01-13 21:33:12.874693842 +0000 UTC m=+54.343045698" Jan 13 21:33:12.898511 systemd-networkd[1652]: cali25a1bbb9fa1: Link UP Jan 13 21:33:12.902707 systemd-networkd[1652]: cali25a1bbb9fa1: Gained carrier Jan 13 21:33:12.978890 kubelet[3391]: I0113 21:33:12.977560 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qn2h7" podStartSLOduration=39.977500522 podStartE2EDuration="39.977500522s" podCreationTimestamp="2025-01-13 21:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:12.922876331 +0000 UTC m=+54.391228183" watchObservedRunningTime="2025-01-13 21:33:12.977500522 +0000 UTC m=+54.445852373" Jan 13 
21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.589 [INFO][5463] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0 calico-apiserver-7f9ff6c558- calico-apiserver 3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1 899 0 2025-01-13 21:32:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f9ff6c558 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-216 calico-apiserver-7f9ff6c558-5ns5g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25a1bbb9fa1 [] []}} ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.589 [INFO][5463] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.745 [INFO][5621] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" HandleID="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.766 [INFO][5621] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" HandleID="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-216", "pod":"calico-apiserver-7f9ff6c558-5ns5g", "timestamp":"2025-01-13 21:33:12.745902587 +0000 UTC"}, Hostname:"ip-172-31-23-216", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.766 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.767 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.768 [INFO][5621] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-216' Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.771 [INFO][5621] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.781 [INFO][5621] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.795 [INFO][5621] ipam/ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.805 [INFO][5621] ipam/ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.822 [INFO][5621] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.823 [INFO][5621] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.827 [INFO][5621] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462 Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.842 [INFO][5621] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.868 [INFO][5621] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.40.6/26] block=192.168.40.0/26 handle="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.868 [INFO][5621] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.6/26] handle="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" host="ip-172-31-23-216" Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.868 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
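[annotation, not part of the log] The IPAM entries above repeatedly reference the block 192.168.40.0/26, affine to host ip-172-31-23-216, with individual pods receiving one /32 each (192.168.40.3 through 192.168.40.6 appear in the surrounding entries). A minimal, hypothetical sketch of that address arithmetic, assuming only Go's standard net/netip package; the pod-to-address pairs are copied from the log, nothing else is implied about Calico's internals:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block recorded by the IPAM plugin as affine to ip-172-31-23-216.
	block := netip.MustParsePrefix("192.168.40.0/26")

	// Per-pod addresses taken from the surrounding CNI log entries.
	pods := map[string]string{
		"calico-apiserver-7f9ff6c558-pqxl5":       "192.168.40.3",
		"csi-node-driver-m7j9j":                   "192.168.40.4",
		"calico-kube-controllers-97574c6fb-sdstw": "192.168.40.5",
		"calico-apiserver-7f9ff6c558-5ns5g":       "192.168.40.6",
	}

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses in the block.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	for pod, ip := range pods {
		addr := netip.MustParseAddr(ip)
		fmt.Printf("%-42s %s in %s: %v\n", pod, addr, block, block.Contains(addr))
	}
}

Running this prints 64 for the block size and reports every logged pod address as contained in 192.168.40.0/26, matching the "Trying affinity for 192.168.40.0/26" and "Successfully claimed IPs" entries above.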
Jan 13 21:33:12.991495 containerd[2100]: 2025-01-13 21:33:12.869 [INFO][5621] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.40.6/26] IPv6=[] ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" HandleID="k8s-pod-network.bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.888 [INFO][5463] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"", Pod:"calico-apiserver-7f9ff6c558-5ns5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a1bbb9fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.889 [INFO][5463] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.40.6/32] ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.889 [INFO][5463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a1bbb9fa1 ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.903 [INFO][5463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.906 [INFO][5463] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462", Pod:"calico-apiserver-7f9ff6c558-5ns5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a1bbb9fa1", MAC:"16:c1:84:cc:2e:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:12.993999 containerd[2100]: 2025-01-13 21:33:12.975 [INFO][5463] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ff6c558-5ns5g" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:13.028195 containerd[2100]: time="2025-01-13T21:33:13.024964966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:13.028195 containerd[2100]: time="2025-01-13T21:33:13.025162095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:13.028195 containerd[2100]: time="2025-01-13T21:33:13.025187431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:13.036487 containerd[2100]: time="2025-01-13T21:33:13.034204626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:13.131134 containerd[2100]: time="2025-01-13T21:33:13.111419194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:13.131134 containerd[2100]: time="2025-01-13T21:33:13.111510241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:13.131134 containerd[2100]: time="2025-01-13T21:33:13.111535608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:13.131134 containerd[2100]: time="2025-01-13T21:33:13.111682182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:13.294668 containerd[2100]: time="2025-01-13T21:33:13.294525532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-97574c6fb-sdstw,Uid:6516d32b-0c84-4b53-a73d-5859b4a02633,Namespace:calico-system,Attempt:1,} returns sandbox id \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\"" Jan 13 21:33:13.301699 systemd-networkd[1652]: cali52c7d2c0e73: Gained IPv6LL Jan 13 21:33:13.303635 systemd-networkd[1652]: calic312bce2af0: Gained IPv6LL Jan 13 21:33:13.328552 containerd[2100]: time="2025-01-13T21:33:13.328503553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ff6c558-5ns5g,Uid:3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462\"" Jan 13 21:33:14.134688 systemd-networkd[1652]: calif1fabf08c78: Gained IPv6LL Jan 13 21:33:14.581038 systemd-networkd[1652]: cali25a1bbb9fa1: Gained IPv6LL Jan 13 21:33:16.203120 containerd[2100]: time="2025-01-13T21:33:16.203056881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:16.204767 containerd[2100]: time="2025-01-13T21:33:16.204686639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 21:33:16.205910 containerd[2100]: time="2025-01-13T21:33:16.205718573Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:16.209270 containerd[2100]: time="2025-01-13T21:33:16.209198702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:16.210297 containerd[2100]: time="2025-01-13T21:33:16.210039396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.64588243s" Jan 13 21:33:16.210297 containerd[2100]: time="2025-01-13T21:33:16.210084244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:33:16.211440 containerd[2100]: time="2025-01-13T21:33:16.211145172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 21:33:16.216912 containerd[2100]: time="2025-01-13T21:33:16.216875646Z" level=info msg="CreateContainer within sandbox \"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 
21:33:16.244037 containerd[2100]: time="2025-01-13T21:33:16.242095464Z" level=info msg="CreateContainer within sandbox \"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31893cf66bfe87a9b78e0966d5800e3dc2f97144a0aa3ee4eb09462ef033b24a\"" Jan 13 21:33:16.246529 containerd[2100]: time="2025-01-13T21:33:16.246482241Z" level=info msg="StartContainer for \"31893cf66bfe87a9b78e0966d5800e3dc2f97144a0aa3ee4eb09462ef033b24a\"" Jan 13 21:33:16.247631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133081879.mount: Deactivated successfully. Jan 13 21:33:16.407686 containerd[2100]: time="2025-01-13T21:33:16.407628832Z" level=info msg="StartContainer for \"31893cf66bfe87a9b78e0966d5800e3dc2f97144a0aa3ee4eb09462ef033b24a\" returns successfully" Jan 13 21:33:16.907659 kubelet[3391]: I0113 21:33:16.907609 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f9ff6c558-pqxl5" podStartSLOduration=28.26023723 podStartE2EDuration="31.907558153s" podCreationTimestamp="2025-01-13 21:32:45 +0000 UTC" firstStartedPulling="2025-01-13 21:33:12.563499111 +0000 UTC m=+54.031850956" lastFinishedPulling="2025-01-13 21:33:16.210820035 +0000 UTC m=+57.679171879" observedRunningTime="2025-01-13 21:33:16.906167268 +0000 UTC m=+58.374519123" watchObservedRunningTime="2025-01-13 21:33:16.907558153 +0000 UTC m=+58.375909998" Jan 13 21:33:17.146986 ntpd[2042]: Listen normally on 6 vxlan.calico 192.168.40.0:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 6 vxlan.calico 192.168.40.0:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 7 vxlan.calico [fe80::64bb:f7ff:fee8:a45e%4]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 8 cali72875e634b0 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 9 cali4097fc172ab [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 10 calic312bce2af0 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 11 cali52c7d2c0e73 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 12 calif1fabf08c78 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:33:17.149154 ntpd[2042]: 13 Jan 21:33:17 ntpd[2042]: Listen normally on 13 cali25a1bbb9fa1 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:33:17.147079 ntpd[2042]: Listen normally on 7 vxlan.calico [fe80::64bb:f7ff:fee8:a45e%4]:123 Jan 13 21:33:17.147140 ntpd[2042]: Listen normally on 8 cali72875e634b0 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 13 21:33:17.147183 ntpd[2042]: Listen normally on 9 cali4097fc172ab [fe80::ecee:eeff:feee:eeee%8]:123 Jan 13 21:33:17.147222 ntpd[2042]: Listen normally on 10 calic312bce2af0 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 13 21:33:17.147259 ntpd[2042]: Listen normally on 11 cali52c7d2c0e73 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 13 21:33:17.147302 ntpd[2042]: Listen normally on 12 calif1fabf08c78 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 13 21:33:17.147338 ntpd[2042]: Listen normally on 13 cali25a1bbb9fa1 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 13 21:33:17.291478 systemd[1]: Started sshd@9-172.31.23.216:22-147.75.109.163:60746.service - OpenSSH per-connection server daemon (147.75.109.163:60746). 
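[annotation, not part of the log] The kubelet entry above reports podStartSLOduration=28.26023723 and podStartE2EDuration="31.907558153s" for calico-apiserver-7f9ff6c558-pqxl5. One reading consistent with those numbers is that the SLO figure is the end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch checking that arithmetic with the timestamps copied from the log; Go's standard time package is assumed, and the interpretation is an inference from the logged values, not a statement about kubelet internals:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created := parse("2025-01-13 21:32:45 +0000 UTC")                // podCreationTimestamp
	firstPull := parse("2025-01-13 21:33:12.563499111 +0000 UTC")    // firstStartedPulling
	lastPull := parse("2025-01-13 21:33:16.210820035 +0000 UTC")     // lastFinishedPulling
	running := parse("2025-01-13 21:33:16.907558153 +0000 UTC")      // watchObservedRunningTime

	e2e := running.Sub(created)          // expected: 31.907558153s
	slo := e2e - lastPull.Sub(firstPull) // expected: about 28.26023723s (pull window excluded)

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}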
Jan 13 21:33:17.573896 sshd[5809]: Accepted publickey for core from 147.75.109.163 port 60746 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:17.576824 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:17.587250 systemd-logind[2056]: New session 10 of user core. Jan 13 21:33:17.592399 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:33:17.858447 containerd[2100]: time="2025-01-13T21:33:17.858263328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:17.861321 containerd[2100]: time="2025-01-13T21:33:17.861177204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 21:33:17.863345 containerd[2100]: time="2025-01-13T21:33:17.863097813Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:17.868257 containerd[2100]: time="2025-01-13T21:33:17.868120352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.656939919s" Jan 13 21:33:17.868257 containerd[2100]: time="2025-01-13T21:33:17.868156422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 21:33:17.869201 containerd[2100]: time="2025-01-13T21:33:17.868998979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 21:33:17.881206 containerd[2100]: time="2025-01-13T21:33:17.879804457Z" level=info msg="CreateContainer within sandbox \"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 21:33:17.923275 kubelet[3391]: I0113 21:33:17.922449 3391 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:33:17.928493 containerd[2100]: time="2025-01-13T21:33:17.928421871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:17.942714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768840357.mount: Deactivated successfully. Jan 13 21:33:17.947496 containerd[2100]: time="2025-01-13T21:33:17.947347429Z" level=info msg="CreateContainer within sandbox \"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ddb6d566e6283cb7e27caeeac0b30a85010a2180a0d508f97c06f83c78fa59d0\"" Jan 13 21:33:17.951503 containerd[2100]: time="2025-01-13T21:33:17.951306056Z" level=info msg="StartContainer for \"ddb6d566e6283cb7e27caeeac0b30a85010a2180a0d508f97c06f83c78fa59d0\"" Jan 13 21:33:18.203966 containerd[2100]: time="2025-01-13T21:33:18.201880051Z" level=info msg="StartContainer for \"ddb6d566e6283cb7e27caeeac0b30a85010a2180a0d508f97c06f83c78fa59d0\" returns successfully" Jan 13 21:33:18.360737 systemd-journald[1569]: Under memory pressure, flushing caches. 
Jan 13 21:33:18.356994 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:18.357040 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:18.707443 sshd[5809]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:18.713339 systemd[1]: sshd@9-172.31.23.216:22-147.75.109.163:60746.service: Deactivated successfully. Jan 13 21:33:18.719368 systemd-logind[2056]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:33:18.719640 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:33:18.721701 systemd-logind[2056]: Removed session 10. Jan 13 21:33:18.735274 systemd[1]: Started sshd@10-172.31.23.216:22-147.75.109.163:60488.service - OpenSSH per-connection server daemon (147.75.109.163:60488). Jan 13 21:33:18.897292 sshd[5861]: Accepted publickey for core from 147.75.109.163 port 60488 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:18.900718 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:18.910213 systemd-logind[2056]: New session 11 of user core. Jan 13 21:33:18.916030 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:33:19.084938 containerd[2100]: time="2025-01-13T21:33:19.084900623Z" level=info msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" Jan 13 21:33:19.369634 sshd[5861]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:19.396424 systemd[1]: sshd@10-172.31.23.216:22-147.75.109.163:60488.service: Deactivated successfully. Jan 13 21:33:19.402905 systemd-logind[2056]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:33:19.450372 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:33:19.478148 systemd[1]: Started sshd@11-172.31.23.216:22-147.75.109.163:60500.service - OpenSSH per-connection server daemon (147.75.109.163:60500). Jan 13 21:33:19.498317 systemd-logind[2056]: Removed session 11. Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.358 [WARNING][5883] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d", Pod:"calico-apiserver-7f9ff6c558-pqxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic312bce2af0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.365 [INFO][5883] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.365 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" iface="eth0" netns="" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.366 [INFO][5883] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.366 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.575 [INFO][5891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.577 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.577 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.600 [WARNING][5891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.600 [INFO][5891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.606 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:19.621798 containerd[2100]: 2025-01-13 21:33:19.615 [INFO][5883] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.621798 containerd[2100]: time="2025-01-13T21:33:19.621771217Z" level=info msg="TearDown network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" successfully" Jan 13 21:33:19.624207 containerd[2100]: time="2025-01-13T21:33:19.621801370Z" level=info msg="StopPodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" returns successfully" Jan 13 21:33:19.641655 containerd[2100]: time="2025-01-13T21:33:19.641258315Z" level=info msg="RemovePodSandbox for \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" Jan 13 21:33:19.641655 containerd[2100]: time="2025-01-13T21:33:19.641305951Z" level=info msg="Forcibly stopping sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\"" Jan 13 21:33:19.837253 sshd[5898]: Accepted publickey for core from 147.75.109.163 port 60500 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:19.855593 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.727 [WARNING][5915] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"7531431e-bdd0-4c4b-b0d9-91a26f9acf4a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"069796690fb5d7f6350b986502528315dfb0805e8fc802c6ddb0554a8934512d", Pod:"calico-apiserver-7f9ff6c558-pqxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic312bce2af0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.727 [INFO][5915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.727 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" iface="eth0" netns="" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.727 [INFO][5915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.727 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.804 [INFO][5921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.805 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.805 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.835 [WARNING][5921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.835 [INFO][5921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" HandleID="k8s-pod-network.50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--pqxl5-eth0" Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.841 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:19.858190 containerd[2100]: 2025-01-13 21:33:19.848 [INFO][5915] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8" Jan 13 21:33:19.858190 containerd[2100]: time="2025-01-13T21:33:19.856220138Z" level=info msg="TearDown network for sandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" successfully" Jan 13 21:33:19.870432 containerd[2100]: time="2025-01-13T21:33:19.868964673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:33:19.870432 containerd[2100]: time="2025-01-13T21:33:19.869055726Z" level=info msg="RemovePodSandbox \"50ab2d574563d2cf3f4b79dafcfe83b0dc64537f2ab7150dfdc24bdac25ad4c8\" returns successfully" Jan 13 21:33:19.873837 containerd[2100]: time="2025-01-13T21:33:19.871510156Z" level=info msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" Jan 13 21:33:19.872242 systemd-logind[2056]: New session 12 of user core. Jan 13 21:33:19.882964 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.058 [WARNING][5945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462", Pod:"calico-apiserver-7f9ff6c558-5ns5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a1bbb9fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.058 [INFO][5945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.058 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" iface="eth0" netns="" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.058 [INFO][5945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.058 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.237 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.237 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.237 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.253 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.253 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.257 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:20.274108 containerd[2100]: 2025-01-13 21:33:20.269 [INFO][5945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.274108 containerd[2100]: time="2025-01-13T21:33:20.273545220Z" level=info msg="TearDown network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" successfully" Jan 13 21:33:20.274108 containerd[2100]: time="2025-01-13T21:33:20.273578703Z" level=info msg="StopPodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" returns successfully" Jan 13 21:33:20.278455 containerd[2100]: time="2025-01-13T21:33:20.275762060Z" level=info msg="RemovePodSandbox for \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" Jan 13 21:33:20.278455 containerd[2100]: time="2025-01-13T21:33:20.275808074Z" level=info msg="Forcibly stopping sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\"" Jan 13 21:33:20.331429 sshd[5898]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:20.340358 systemd[1]: sshd@11-172.31.23.216:22-147.75.109.163:60500.service: Deactivated successfully. Jan 13 21:33:20.351545 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:33:20.351780 systemd-logind[2056]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:33:20.358323 systemd-logind[2056]: Removed session 12. Jan 13 21:33:20.405881 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:20.407656 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:20.405923 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.429 [WARNING][5977] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0", GenerateName:"calico-apiserver-7f9ff6c558-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ddd0a1e-3af8-462a-b5e7-d0696cbfc1e1", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ff6c558", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462", Pod:"calico-apiserver-7f9ff6c558-5ns5g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25a1bbb9fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.432 [INFO][5977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.432 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" iface="eth0" netns="" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.432 [INFO][5977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.432 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.505 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.505 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.505 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.526 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.526 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" HandleID="k8s-pod-network.0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Workload="ip--172--31--23--216-k8s-calico--apiserver--7f9ff6c558--5ns5g-eth0" Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.551 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:20.562148 containerd[2100]: 2025-01-13 21:33:20.556 [INFO][5977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465" Jan 13 21:33:20.564438 containerd[2100]: time="2025-01-13T21:33:20.562270666Z" level=info msg="TearDown network for sandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" successfully" Jan 13 21:33:20.571053 containerd[2100]: time="2025-01-13T21:33:20.571004888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:33:20.571053 containerd[2100]: time="2025-01-13T21:33:20.571082576Z" level=info msg="RemovePodSandbox \"0960b6c50f41504172e0320529dad5eef1e1024faaf7f631ad753b6edb6e5465\" returns successfully" Jan 13 21:33:20.572711 containerd[2100]: time="2025-01-13T21:33:20.572666238Z" level=info msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.693 [WARNING][6004] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1349c369-e827-4f6c-bda4-a032fbaa74c0", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3", Pod:"csi-node-driver-m7j9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52c7d2c0e73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.693 [INFO][6004] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.693 [INFO][6004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" iface="eth0" netns="" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.693 [INFO][6004] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.693 [INFO][6004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.770 [INFO][6012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.772 [INFO][6012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.772 [INFO][6012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.800 [WARNING][6012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.800 [INFO][6012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.804 [INFO][6012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:20.813626 containerd[2100]: 2025-01-13 21:33:20.809 [INFO][6004] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.813626 containerd[2100]: time="2025-01-13T21:33:20.813186020Z" level=info msg="TearDown network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" successfully" Jan 13 21:33:20.813626 containerd[2100]: time="2025-01-13T21:33:20.813214969Z" level=info msg="StopPodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" returns successfully" Jan 13 21:33:20.815556 containerd[2100]: time="2025-01-13T21:33:20.815425944Z" level=info msg="RemovePodSandbox for \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" Jan 13 21:33:20.815556 containerd[2100]: time="2025-01-13T21:33:20.815555283Z" level=info msg="Forcibly stopping sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\"" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.913 [WARNING][6034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1349c369-e827-4f6c-bda4-a032fbaa74c0", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3", Pod:"csi-node-driver-m7j9j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52c7d2c0e73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.913 [INFO][6034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.913 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" iface="eth0" netns="" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.914 [INFO][6034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.914 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.967 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.967 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.968 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.980 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.980 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" HandleID="k8s-pod-network.e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Workload="ip--172--31--23--216-k8s-csi--node--driver--m7j9j-eth0" Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.982 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:20.987229 containerd[2100]: 2025-01-13 21:33:20.984 [INFO][6034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f" Jan 13 21:33:20.988709 containerd[2100]: time="2025-01-13T21:33:20.988563207Z" level=info msg="TearDown network for sandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" successfully" Jan 13 21:33:20.996694 containerd[2100]: time="2025-01-13T21:33:20.996645027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:33:20.997261 containerd[2100]: time="2025-01-13T21:33:20.997226869Z" level=info msg="RemovePodSandbox \"e720c157c9e2fea93369da4ca75ddaa59865e235c48c826fdb55612408673a9f\" returns successfully" Jan 13 21:33:20.999328 containerd[2100]: time="2025-01-13T21:33:20.999295135Z" level=info msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.093 [WARNING][6059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5d39a778-23bc-4ff9-9d67-cbce50e1aa94", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d", Pod:"coredns-76f75df574-xh79p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4097fc172ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.094 [INFO][6059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.094 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" iface="eth0" netns="" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.094 [INFO][6059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.094 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.158 [INFO][6065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.159 [INFO][6065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.159 [INFO][6065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.171 [WARNING][6065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.171 [INFO][6065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.174 [INFO][6065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:21.182531 containerd[2100]: 2025-01-13 21:33:21.176 [INFO][6059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.182531 containerd[2100]: time="2025-01-13T21:33:21.182289189Z" level=info msg="TearDown network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" successfully" Jan 13 21:33:21.182531 containerd[2100]: time="2025-01-13T21:33:21.182320965Z" level=info msg="StopPodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" returns successfully" Jan 13 21:33:21.184508 containerd[2100]: time="2025-01-13T21:33:21.184060671Z" level=info msg="RemovePodSandbox for \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" Jan 13 21:33:21.184508 containerd[2100]: time="2025-01-13T21:33:21.184104183Z" level=info msg="Forcibly stopping sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\"" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.273 [WARNING][6083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5d39a778-23bc-4ff9-9d67-cbce50e1aa94", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"4888a017b387696fd4d61f9239f3639dd0dc77b36a6fb9dd9a393f6c946cf43d", Pod:"coredns-76f75df574-xh79p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4097fc172ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.273 [INFO][6083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.273 [INFO][6083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" iface="eth0" netns="" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.273 [INFO][6083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.273 [INFO][6083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.323 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.324 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.324 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.335 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.335 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" HandleID="k8s-pod-network.06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--xh79p-eth0" Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.338 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:21.346556 containerd[2100]: 2025-01-13 21:33:21.344 [INFO][6083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd" Jan 13 21:33:21.348208 containerd[2100]: time="2025-01-13T21:33:21.347612497Z" level=info msg="TearDown network for sandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" successfully" Jan 13 21:33:21.356017 containerd[2100]: time="2025-01-13T21:33:21.355785851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:33:21.356017 containerd[2100]: time="2025-01-13T21:33:21.355893675Z" level=info msg="RemovePodSandbox \"06a90424f5008c3a4cb972bf2e357e5159c1097ee9d9e188e864596c33bfaafd\" returns successfully" Jan 13 21:33:21.356688 containerd[2100]: time="2025-01-13T21:33:21.356580810Z" level=info msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.440 [WARNING][6107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1da65a44-04e3-44d6-8959-9a867b5fe933", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb", Pod:"coredns-76f75df574-qn2h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72875e634b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.441 [INFO][6107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.441 [INFO][6107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" iface="eth0" netns="" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.441 [INFO][6107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.441 [INFO][6107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.493 [INFO][6113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.493 [INFO][6113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.494 [INFO][6113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.505 [WARNING][6113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.505 [INFO][6113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.509 [INFO][6113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:21.515155 containerd[2100]: 2025-01-13 21:33:21.513 [INFO][6107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.517434 containerd[2100]: time="2025-01-13T21:33:21.515123404Z" level=info msg="TearDown network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" successfully" Jan 13 21:33:21.517555 containerd[2100]: time="2025-01-13T21:33:21.517436070Z" level=info msg="StopPodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" returns successfully" Jan 13 21:33:21.518449 containerd[2100]: time="2025-01-13T21:33:21.517984823Z" level=info msg="RemovePodSandbox for \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" Jan 13 21:33:21.518859 containerd[2100]: time="2025-01-13T21:33:21.518789652Z" level=info msg="Forcibly stopping sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\"" Jan 13 21:33:21.569429 containerd[2100]: time="2025-01-13T21:33:21.569380260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:21.571235 containerd[2100]: time="2025-01-13T21:33:21.571175372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 13 21:33:21.573970 containerd[2100]: time="2025-01-13T21:33:21.573893367Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:21.579104 containerd[2100]: time="2025-01-13T21:33:21.579032014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:21.580439 containerd[2100]: time="2025-01-13T21:33:21.579819635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.710780438s" Jan 13 21:33:21.580439 containerd[2100]: time="2025-01-13T21:33:21.579876747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 13 21:33:21.582230 containerd[2100]: time="2025-01-13T21:33:21.580879619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 21:33:21.627958 containerd[2100]: time="2025-01-13T21:33:21.627920255Z" level=info msg="CreateContainer within sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 21:33:21.654461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476782627.mount: Deactivated successfully. Jan 13 21:33:21.665130 containerd[2100]: time="2025-01-13T21:33:21.665084149Z" level=info msg="CreateContainer within sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\"" Jan 13 21:33:21.667821 containerd[2100]: time="2025-01-13T21:33:21.667362595Z" level=info msg="StartContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\"" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.617 [WARNING][6131] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1da65a44-04e3-44d6-8959-9a867b5fe933", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"6ae3f25e998e949f370e112e89f5c0b31913c41ddaa6af112e1d5ee886f509cb", Pod:"coredns-76f75df574-qn2h7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72875e634b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.618 [INFO][6131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.618 [INFO][6131] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" iface="eth0" netns="" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.618 [INFO][6131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.618 [INFO][6131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.682 [INFO][6139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.682 [INFO][6139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.682 [INFO][6139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.693 [WARNING][6139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.693 [INFO][6139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" HandleID="k8s-pod-network.7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Workload="ip--172--31--23--216-k8s-coredns--76f75df574--qn2h7-eth0" Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.696 [INFO][6139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:21.702099 containerd[2100]: 2025-01-13 21:33:21.699 [INFO][6131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63" Jan 13 21:33:21.702099 containerd[2100]: time="2025-01-13T21:33:21.701242528Z" level=info msg="TearDown network for sandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" successfully" Jan 13 21:33:21.722639 containerd[2100]: time="2025-01-13T21:33:21.721912959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:33:21.722639 containerd[2100]: time="2025-01-13T21:33:21.722024204Z" level=info msg="RemovePodSandbox \"7e61938cb3913460527310aa3c8f00066ff32ec8133d0845c3cac36864b1ea63\" returns successfully" Jan 13 21:33:21.724596 containerd[2100]: time="2025-01-13T21:33:21.723355914Z" level=info msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" Jan 13 21:33:21.865634 containerd[2100]: time="2025-01-13T21:33:21.865585427Z" level=info msg="StartContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" returns successfully" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.846 [WARNING][6171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0", GenerateName:"calico-kube-controllers-97574c6fb-", Namespace:"calico-system", SelfLink:"", UID:"6516d32b-0c84-4b53-a73d-5859b4a02633", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"97574c6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7", Pod:"calico-kube-controllers-97574c6fb-sdstw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1fabf08c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.847 [INFO][6171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.847 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" iface="eth0" netns="" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.848 [INFO][6171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.848 [INFO][6171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.906 [INFO][6188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.907 [INFO][6188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.907 [INFO][6188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.916 [WARNING][6188] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.916 [INFO][6188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.919 [INFO][6188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:21.924962 containerd[2100]: 2025-01-13 21:33:21.921 [INFO][6171] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:21.927201 containerd[2100]: time="2025-01-13T21:33:21.924998111Z" level=info msg="TearDown network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" successfully" Jan 13 21:33:21.927201 containerd[2100]: time="2025-01-13T21:33:21.925029686Z" level=info msg="StopPodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" returns successfully" Jan 13 21:33:21.928296 containerd[2100]: time="2025-01-13T21:33:21.927453264Z" level=info msg="RemovePodSandbox for \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" Jan 13 21:33:21.928296 containerd[2100]: time="2025-01-13T21:33:21.927717321Z" level=info msg="Forcibly stopping sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\"" Jan 13 21:33:21.981165 containerd[2100]: time="2025-01-13T21:33:21.980583111Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:21.989470 containerd[2100]: time="2025-01-13T21:33:21.989178245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 21:33:22.004954 containerd[2100]: time="2025-01-13T21:33:22.004892554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 423.460592ms" Jan 13 21:33:22.006084 containerd[2100]: time="2025-01-13T21:33:22.005952809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 21:33:22.018047 containerd[2100]: time="2025-01-13T21:33:22.017133543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 21:33:22.053167 kubelet[3391]: I0113 21:33:22.053104 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-97574c6fb-sdstw" podStartSLOduration=29.768632301 podStartE2EDuration="38.052290316s" podCreationTimestamp="2025-01-13 21:32:44 +0000 UTC" firstStartedPulling="2025-01-13 21:33:13.296621601 +0000 UTC m=+54.764973439" lastFinishedPulling="2025-01-13 21:33:21.580279611 +0000 UTC m=+63.048631454" observedRunningTime="2025-01-13 21:33:22.048189443 +0000 UTC m=+63.516541307" watchObservedRunningTime="2025-01-13 21:33:22.052290316 +0000 UTC m=+63.520642169" Jan 13 21:33:22.066452 containerd[2100]: time="2025-01-13T21:33:22.066400729Z" level=info msg="CreateContainer within sandbox \"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 21:33:22.133101 containerd[2100]: time="2025-01-13T21:33:22.132072986Z" level=info msg="CreateContainer within sandbox \"bd72b0e3faa851cd6a634fafe3e868cbfd6892eeb23913cd627473147ee63462\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dff2fa36eb871dfe31912ba8829bfbea5f0f87ea13f8a028762814fac304cf62\"" Jan 13 21:33:22.133101 containerd[2100]: time="2025-01-13T21:33:22.132913669Z" level=info msg="StartContainer for 
\"dff2fa36eb871dfe31912ba8829bfbea5f0f87ea13f8a028762814fac304cf62\"" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.135 [WARNING][6215] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0", GenerateName:"calico-kube-controllers-97574c6fb-", Namespace:"calico-system", SelfLink:"", UID:"6516d32b-0c84-4b53-a73d-5859b4a02633", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 21, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"97574c6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-216", ContainerID:"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7", Pod:"calico-kube-controllers-97574c6fb-sdstw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif1fabf08c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.135 [INFO][6215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.135 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" iface="eth0" netns="" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.135 [INFO][6215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.135 [INFO][6215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.206 [INFO][6228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.209 [INFO][6228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.209 [INFO][6228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.221 [WARNING][6228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.221 [INFO][6228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" HandleID="k8s-pod-network.aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.233 [INFO][6228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:22.246150 containerd[2100]: 2025-01-13 21:33:22.240 [INFO][6215] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2" Jan 13 21:33:22.246150 containerd[2100]: time="2025-01-13T21:33:22.244352660Z" level=info msg="TearDown network for sandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" successfully" Jan 13 21:33:22.258810 containerd[2100]: time="2025-01-13T21:33:22.258744540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:33:22.258976 containerd[2100]: time="2025-01-13T21:33:22.258845324Z" level=info msg="RemovePodSandbox \"aba47a4051f24607b4bb469a39d943adfdb3508b13bee8e4ff6ee6abb58edec2\" returns successfully" Jan 13 21:33:22.312858 containerd[2100]: time="2025-01-13T21:33:22.312692612Z" level=info msg="StartContainer for \"dff2fa36eb871dfe31912ba8829bfbea5f0f87ea13f8a028762814fac304cf62\" returns successfully" Jan 13 21:33:23.630758 containerd[2100]: time="2025-01-13T21:33:23.630707437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:23.632054 containerd[2100]: time="2025-01-13T21:33:23.631851552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 13 21:33:23.633882 containerd[2100]: time="2025-01-13T21:33:23.633474834Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:23.636281 containerd[2100]: time="2025-01-13T21:33:23.636188778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:33:23.637814 containerd[2100]: time="2025-01-13T21:33:23.637697781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.620486672s" Jan 13 21:33:23.639850 containerd[2100]: time="2025-01-13T21:33:23.638059056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 13 21:33:23.642237 containerd[2100]: time="2025-01-13T21:33:23.642201228Z" level=info msg="CreateContainer within sandbox \"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 21:33:23.715647 containerd[2100]: time="2025-01-13T21:33:23.715567415Z" level=info msg="CreateContainer within sandbox \"4c98c3d2f5c6021b26675f5c836294d726d77409ad636d98e07c130b23ffacb3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e2bcd0d2d341da70fb19e906c1e86f4b44b3b58061e88f463a4dcf8261afc668\"" Jan 13 21:33:23.738617 containerd[2100]: time="2025-01-13T21:33:23.731625862Z" level=info msg="StartContainer for \"e2bcd0d2d341da70fb19e906c1e86f4b44b3b58061e88f463a4dcf8261afc668\"" Jan 13 21:33:23.884934 containerd[2100]: time="2025-01-13T21:33:23.883669265Z" level=info msg="StartContainer for \"e2bcd0d2d341da70fb19e906c1e86f4b44b3b58061e88f463a4dcf8261afc668\" returns successfully" Jan 13 21:33:23.937203 kubelet[3391]: I0113 21:33:23.937144 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f9ff6c558-5ns5g" podStartSLOduration=30.254202061 podStartE2EDuration="38.937010864s" podCreationTimestamp="2025-01-13 21:32:45 +0000 UTC" firstStartedPulling="2025-01-13 21:33:13.330537661 +0000 UTC m=+54.798889494" lastFinishedPulling="2025-01-13 21:33:22.013346464 +0000 UTC m=+63.481698297" observedRunningTime="2025-01-13 21:33:23.12821406 +0000 UTC m=+64.596565911" watchObservedRunningTime="2025-01-13 21:33:23.937010864 +0000 UTC m=+65.405362714" Jan 13 21:33:24.180575 kubelet[3391]: I0113 21:33:24.180453 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-m7j9j" podStartSLOduration=29.242026694 podStartE2EDuration="40.180396604s" podCreationTimestamp="2025-01-13 21:32:44 +0000 UTC" firstStartedPulling="2025-01-13 21:33:12.700089479 +0000 UTC m=+54.168441310" lastFinishedPulling="2025-01-13 21:33:23.638459375 +0000 UTC m=+65.106811220" observedRunningTime="2025-01-13 21:33:24.17829493 +0000 UTC m=+65.646646783" watchObservedRunningTime="2025-01-13 21:33:24.180396604 +0000 UTC m=+65.648748457" Jan 13 21:33:24.898953 kubelet[3391]: I0113 21:33:24.898904 3391 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 21:33:24.909793 kubelet[3391]: I0113 21:33:24.909753 3391 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 21:33:25.365303 systemd[1]: Started sshd@12-172.31.23.216:22-147.75.109.163:60510.service - OpenSSH per-connection server daemon (147.75.109.163:60510). 
Jan 13 21:33:25.628956 sshd[6337]: Accepted publickey for core from 147.75.109.163 port 60510 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:25.632514 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:25.640195 systemd-logind[2056]: New session 13 of user core. Jan 13 21:33:25.646675 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:33:25.751246 systemd[1]: run-containerd-runc-k8s.io-a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5-runc.qV7rss.mount: Deactivated successfully. Jan 13 21:33:26.367935 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:26.358622 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:26.358670 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:26.635681 sshd[6337]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:26.641116 systemd[1]: sshd@12-172.31.23.216:22-147.75.109.163:60510.service: Deactivated successfully. Jan 13 21:33:26.647261 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:33:26.648365 systemd-logind[2056]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:33:26.649685 systemd-logind[2056]: Removed session 13. Jan 13 21:33:31.669247 systemd[1]: Started sshd@13-172.31.23.216:22-147.75.109.163:45758.service - OpenSSH per-connection server daemon (147.75.109.163:45758). Jan 13 21:33:31.865436 sshd[6399]: Accepted publickey for core from 147.75.109.163 port 45758 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:31.868403 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:31.877956 systemd-logind[2056]: New session 14 of user core. Jan 13 21:33:31.881419 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:33:32.173331 sshd[6399]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:32.185423 systemd[1]: sshd@13-172.31.23.216:22-147.75.109.163:45758.service: Deactivated successfully. Jan 13 21:33:32.191244 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:33:32.192289 systemd-logind[2056]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:33:32.193907 systemd-logind[2056]: Removed session 14. Jan 13 21:33:37.201301 systemd[1]: Started sshd@14-172.31.23.216:22-147.75.109.163:45774.service - OpenSSH per-connection server daemon (147.75.109.163:45774). Jan 13 21:33:37.405875 sshd[6418]: Accepted publickey for core from 147.75.109.163 port 45774 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:37.408323 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:37.416572 systemd-logind[2056]: New session 15 of user core. Jan 13 21:33:37.422252 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:33:37.731128 sshd[6418]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:37.738047 systemd[1]: sshd@14-172.31.23.216:22-147.75.109.163:45774.service: Deactivated successfully. Jan 13 21:33:37.745726 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:33:37.747018 systemd-logind[2056]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:33:37.748341 systemd-logind[2056]: Removed session 15. 
Jan 13 21:33:41.836877 kubelet[3391]: I0113 21:33:41.833934 3391 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:33:42.764133 systemd[1]: Started sshd@15-172.31.23.216:22-147.75.109.163:56670.service - OpenSSH per-connection server daemon (147.75.109.163:56670). Jan 13 21:33:42.955952 sshd[6435]: Accepted publickey for core from 147.75.109.163 port 56670 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:42.956370 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:42.965203 systemd-logind[2056]: New session 16 of user core. Jan 13 21:33:42.972416 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:33:43.269729 sshd[6435]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:43.281792 systemd[1]: sshd@15-172.31.23.216:22-147.75.109.163:56670.service: Deactivated successfully. Jan 13 21:33:43.292896 systemd-logind[2056]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:33:43.315598 systemd[1]: Started sshd@16-172.31.23.216:22-147.75.109.163:56678.service - OpenSSH per-connection server daemon (147.75.109.163:56678). Jan 13 21:33:43.316205 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:33:43.319175 systemd-logind[2056]: Removed session 16. Jan 13 21:33:43.557090 sshd[6449]: Accepted publickey for core from 147.75.109.163 port 56678 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:43.562422 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:43.573172 systemd-logind[2056]: New session 17 of user core. Jan 13 21:33:43.577372 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:33:44.298941 sshd[6449]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:44.307702 systemd[1]: sshd@16-172.31.23.216:22-147.75.109.163:56678.service: Deactivated successfully. Jan 13 21:33:44.324142 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:33:44.326896 systemd-logind[2056]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:33:44.339383 systemd[1]: Started sshd@17-172.31.23.216:22-147.75.109.163:56686.service - OpenSSH per-connection server daemon (147.75.109.163:56686). Jan 13 21:33:44.343372 systemd-logind[2056]: Removed session 17. Jan 13 21:33:44.540902 sshd[6467]: Accepted publickey for core from 147.75.109.163 port 56686 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:44.542516 sshd[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:44.547893 systemd-logind[2056]: New session 18 of user core. Jan 13 21:33:44.558026 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:33:46.240666 containerd[2100]: time="2025-01-13T21:33:46.239884647Z" level=info msg="StopContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" with timeout 300 (s)" Jan 13 21:33:46.262918 containerd[2100]: time="2025-01-13T21:33:46.257259332Z" level=info msg="Stop container \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" with signal terminated" Jan 13 21:33:46.329234 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:46.329009 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:46.329047 systemd-resolved[1974]: Flushed all caches. 
Jan 13 21:33:47.061042 systemd[1]: run-containerd-runc-k8s.io-21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8-runc.DHHv5Z.mount: Deactivated successfully. Jan 13 21:33:47.428402 containerd[2100]: time="2025-01-13T21:33:47.428157276Z" level=info msg="StopContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" with timeout 30 (s)" Jan 13 21:33:47.430455 containerd[2100]: time="2025-01-13T21:33:47.430230444Z" level=info msg="Stop container \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" with signal terminated" Jan 13 21:33:47.755793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5-rootfs.mount: Deactivated successfully. Jan 13 21:33:47.792912 containerd[2100]: time="2025-01-13T21:33:47.784291342Z" level=info msg="shim disconnected" id=a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5 namespace=k8s.io Jan 13 21:33:47.835764 containerd[2100]: time="2025-01-13T21:33:47.835486688Z" level=warning msg="cleaning up after shim disconnected" id=a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5 namespace=k8s.io Jan 13 21:33:47.835764 containerd[2100]: time="2025-01-13T21:33:47.835535446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:48.373300 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:48.387176 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:48.373311 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:48.417738 containerd[2100]: time="2025-01-13T21:33:48.416861926Z" level=info msg="StopContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" returns successfully" Jan 13 21:33:48.564221 containerd[2100]: time="2025-01-13T21:33:48.563461342Z" level=info msg="StopContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" with timeout 4 (s)" Jan 13 21:33:48.576167 containerd[2100]: time="2025-01-13T21:33:48.567951993Z" level=info msg="StopPodSandbox for \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\"" Jan 13 21:33:48.578087 containerd[2100]: time="2025-01-13T21:33:48.577408587Z" level=info msg="Container to stop \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:48.580941 containerd[2100]: time="2025-01-13T21:33:48.576755696Z" level=info msg="Stop container \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" with signal terminated" Jan 13 21:33:48.602214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7-shm.mount: Deactivated successfully. Jan 13 21:33:48.915785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:48.917121 containerd[2100]: time="2025-01-13T21:33:48.916027533Z" level=info msg="shim disconnected" id=730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7 namespace=k8s.io Jan 13 21:33:48.917121 containerd[2100]: time="2025-01-13T21:33:48.916094119Z" level=warning msg="cleaning up after shim disconnected" id=730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7 namespace=k8s.io Jan 13 21:33:48.917121 containerd[2100]: time="2025-01-13T21:33:48.916105388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:48.922646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8-rootfs.mount: Deactivated successfully. Jan 13 21:33:48.936482 containerd[2100]: time="2025-01-13T21:33:48.936254878Z" level=info msg="shim disconnected" id=21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8 namespace=k8s.io Jan 13 21:33:48.936482 containerd[2100]: time="2025-01-13T21:33:48.936338407Z" level=warning msg="cleaning up after shim disconnected" id=21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8 namespace=k8s.io Jan 13 21:33:48.936482 containerd[2100]: time="2025-01-13T21:33:48.936351152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:49.197438 containerd[2100]: time="2025-01-13T21:33:49.196594992Z" level=info msg="StopContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" returns successfully" Jan 13 21:33:49.205560 containerd[2100]: time="2025-01-13T21:33:49.202728143Z" level=info msg="StopPodSandbox for \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\"" Jan 13 21:33:49.205560 containerd[2100]: time="2025-01-13T21:33:49.202788077Z" level=info msg="Container to stop \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:49.205560 containerd[2100]: time="2025-01-13T21:33:49.202806968Z" level=info msg="Container to stop \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:49.205560 containerd[2100]: time="2025-01-13T21:33:49.202824285Z" level=info msg="Container to stop \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:49.220903 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88-shm.mount: Deactivated successfully. Jan 13 21:33:49.395653 containerd[2100]: time="2025-01-13T21:33:49.394039258Z" level=info msg="shim disconnected" id=86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88 namespace=k8s.io Jan 13 21:33:49.395653 containerd[2100]: time="2025-01-13T21:33:49.394104890Z" level=warning msg="cleaning up after shim disconnected" id=86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88 namespace=k8s.io Jan 13 21:33:49.395653 containerd[2100]: time="2025-01-13T21:33:49.394116831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:49.398244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:49.495550 containerd[2100]: time="2025-01-13T21:33:49.495286027Z" level=info msg="TearDown network for sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" successfully" Jan 13 21:33:49.496816 containerd[2100]: time="2025-01-13T21:33:49.496780472Z" level=info msg="StopPodSandbox for \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" returns successfully" Jan 13 21:33:49.786966 kubelet[3391]: I0113 21:33:49.786905 3391 topology_manager.go:215] "Topology Admit Handler" podUID="726cbd57-f04b-4938-895f-e39534270dab" podNamespace="calico-system" podName="calico-node-lgfmz" Jan 13 21:33:49.805144 kubelet[3391]: E0113 21:33:49.805041 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" containerName="flexvol-driver" Jan 13 21:33:49.805062 sshd[6467]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:49.851994 systemd[1]: Started sshd@18-172.31.23.216:22-147.75.109.163:56440.service - OpenSSH per-connection server daemon (147.75.109.163:56440). Jan 13 21:33:49.852601 systemd[1]: sshd@17-172.31.23.216:22-147.75.109.163:56686.service: Deactivated successfully. Jan 13 21:33:49.866873 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:33:49.879348 systemd-logind[2056]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:33:49.891281 systemd-logind[2056]: Removed session 18. Jan 13 21:33:49.892728 kubelet[3391]: E0113 21:33:49.892349 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" containerName="install-cni" Jan 13 21:33:49.892728 kubelet[3391]: E0113 21:33:49.892384 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" containerName="calico-node" Jan 13 21:33:49.900363 kubelet[3391]: I0113 21:33:49.899992 3391 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" containerName="calico-node" Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.908813 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-tigera-ca-bundle\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.909260 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-xtables-lock\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.909303 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr9mh\" (UniqueName: \"kubernetes.io/projected/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-kube-api-access-kr9mh\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.909339 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-flexvol-driver-host\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.909363 3391 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-net-dir\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.910156 kubelet[3391]: I0113 21:33:49.909392 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-log-dir\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909415 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-lib-modules\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909446 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-run-calico\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909487 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-node-certs\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909515 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-lib-calico\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909547 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-bin-dir\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.912598 kubelet[3391]: I0113 21:33:49.909581 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-policysync\") pod \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\" (UID: \"ee2e4eb5-5141-49ae-b4d9-ac88f344b28e\") " Jan 13 21:33:49.936694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:49.942958 containerd[2100]: time="2025-01-13T21:33:49.942892029Z" level=info msg="shim disconnected" id=dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8 namespace=k8s.io Jan 13 21:33:49.944279 containerd[2100]: time="2025-01-13T21:33:49.943418498Z" level=warning msg="cleaning up after shim disconnected" id=dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8 namespace=k8s.io Jan 13 21:33:49.944279 containerd[2100]: time="2025-01-13T21:33:49.943445497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:49.970021 kubelet[3391]: I0113 21:33:49.969726 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.970268 kubelet[3391]: I0113 21:33:49.970245 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.970379 kubelet[3391]: I0113 21:33:49.970364 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.972007 kubelet[3391]: I0113 21:33:49.971973 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.972432 kubelet[3391]: I0113 21:33:49.972179 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.974651 kubelet[3391]: I0113 21:33:49.967174 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:49.985427 systemd-networkd[1652]: calif1fabf08c78: Link DOWN Jan 13 21:33:49.985437 systemd-networkd[1652]: calif1fabf08c78: Lost carrier Jan 13 21:33:50.017547 kubelet[3391]: I0113 21:33:50.015797 3391 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-flexvol-driver-host\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.099469 kubelet[3391]: I0113 21:33:50.099370 3391 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-net-dir\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.099748 kubelet[3391]: I0113 21:33:50.099734 3391 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-xtables-lock\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.100660 kubelet[3391]: I0113 21:33:50.100366 3391 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-log-dir\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.100816 kubelet[3391]: I0113 21:33:50.100803 3391 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-lib-modules\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.100939 kubelet[3391]: I0113 21:33:50.100930 3391 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-run-calico\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.101015 kubelet[3391]: I0113 21:33:50.017813 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:50.101118 kubelet[3391]: I0113 21:33:50.017882 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:50.101193 kubelet[3391]: I0113 21:33:50.017892 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-policysync" (OuterVolumeSpecName: "policysync") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:33:50.120125 systemd[1]: var-lib-kubelet-pods-ee2e4eb5\x2d5141\x2d49ae\x2db4d9\x2dac88f344b28e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jan 13 21:33:50.128086 kubelet[3391]: I0113 21:33:50.128025 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-node-certs" (OuterVolumeSpecName: "node-certs") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:33:50.137245 systemd[1]: var-lib-kubelet-pods-ee2e4eb5\x2d5141\x2d49ae\x2db4d9\x2dac88f344b28e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 13 21:33:50.145301 containerd[2100]: time="2025-01-13T21:33:50.145252607Z" level=info msg="StopContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" returns successfully" Jan 13 21:33:50.153886 kubelet[3391]: I0113 21:33:50.153403 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-kube-api-access-kr9mh" (OuterVolumeSpecName: "kube-api-access-kr9mh") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "kube-api-access-kr9mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:33:50.159250 systemd[1]: var-lib-kubelet-pods-ee2e4eb5\x2d5141\x2d49ae\x2db4d9\x2dac88f344b28e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkr9mh.mount: Deactivated successfully. Jan 13 21:33:50.174336 kubelet[3391]: I0113 21:33:50.174291 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" (UID: "ee2e4eb5-5141-49ae-b4d9-ac88f344b28e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:33:50.175854 sshd[6705]: Accepted publickey for core from 147.75.109.163 port 56440 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:50.179891 containerd[2100]: time="2025-01-13T21:33:50.178435593Z" level=info msg="StopPodSandbox for \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\"" Jan 13 21:33:50.179891 containerd[2100]: time="2025-01-13T21:33:50.178502163Z" level=info msg="Container to stop \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:33:50.189785 kubelet[3391]: I0113 21:33:50.185599 3391 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:33:50.187235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b-shm.mount: Deactivated successfully. Jan 13 21:33:50.192276 sshd[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:50.210097 systemd-logind[2056]: New session 19 of user core. Jan 13 21:33:50.216299 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 21:33:50.247842 kubelet[3391]: I0113 21:33:50.247790 3391 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-node-certs\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.248304 kubelet[3391]: I0113 21:33:50.248072 3391 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-var-lib-calico\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.248304 kubelet[3391]: I0113 21:33:50.248094 3391 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-cni-bin-dir\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.248304 kubelet[3391]: I0113 21:33:50.248110 3391 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-policysync\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.248304 kubelet[3391]: I0113 21:33:50.248125 3391 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-tigera-ca-bundle\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.248304 kubelet[3391]: I0113 21:33:50.248141 3391 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kr9mh\" (UniqueName: \"kubernetes.io/projected/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e-kube-api-access-kr9mh\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.279674 containerd[2100]: time="2025-01-13T21:33:50.278268072Z" level=info msg="shim disconnected" id=db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b namespace=k8s.io Jan 13 21:33:50.279674 containerd[2100]: time="2025-01-13T21:33:50.278334034Z" level=warning msg="cleaning up after shim disconnected" id=db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b namespace=k8s.io Jan 13 21:33:50.279674 containerd[2100]: time="2025-01-13T21:33:50.278345463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:50.285105 kubelet[3391]: I0113 21:33:50.285037 3391 scope.go:117] "RemoveContainer" containerID="21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8" Jan 13 21:33:50.383754 containerd[2100]: time="2025-01-13T21:33:50.383460650Z" level=info msg="TearDown network for sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" successfully" Jan 13 21:33:50.383754 containerd[2100]: time="2025-01-13T21:33:50.383501201Z" level=info msg="StopPodSandbox for \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" returns successfully" Jan 13 21:33:50.386844 kubelet[3391]: I0113 21:33:50.386228 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-xtables-lock\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.386844 kubelet[3391]: I0113 21:33:50.386286 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/726cbd57-f04b-4938-895f-e39534270dab-tigera-ca-bundle\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " 
pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.386844 kubelet[3391]: I0113 21:33:50.386320 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-cni-bin-dir\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.386844 kubelet[3391]: I0113 21:33:50.386438 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-lib-modules\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.386844 kubelet[3391]: I0113 21:33:50.386491 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/726cbd57-f04b-4938-895f-e39534270dab-node-certs\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389308 kubelet[3391]: I0113 21:33:50.386527 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-cni-log-dir\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389308 kubelet[3391]: I0113 21:33:50.386559 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2tw\" (UniqueName: \"kubernetes.io/projected/726cbd57-f04b-4938-895f-e39534270dab-kube-api-access-qz2tw\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389308 kubelet[3391]: I0113 21:33:50.386599 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-var-lib-calico\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389308 kubelet[3391]: I0113 21:33:50.386627 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-cni-net-dir\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389308 kubelet[3391]: I0113 21:33:50.386659 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-policysync\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389719 kubelet[3391]: I0113 21:33:50.386696 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-flexvol-driver-host\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.389719 kubelet[3391]: I0113 
21:33:50.386728 3391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/726cbd57-f04b-4938-895f-e39534270dab-var-run-calico\") pod \"calico-node-lgfmz\" (UID: \"726cbd57-f04b-4938-895f-e39534270dab\") " pod="calico-system/calico-node-lgfmz" Jan 13 21:33:50.422009 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:50.423674 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:50.423688 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:50.424464 containerd[2100]: time="2025-01-13T21:33:50.424417082Z" level=info msg="RemoveContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\"" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.967 [INFO][6671] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.971 [INFO][6671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" iface="eth0" netns="/var/run/netns/cni-9dd354db-20dc-5c30-d8ae-24f48fcf2726" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.974 [INFO][6671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" iface="eth0" netns="/var/run/netns/cni-9dd354db-20dc-5c30-d8ae-24f48fcf2726" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.994 [INFO][6671] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" after=22.008728ms iface="eth0" netns="/var/run/netns/cni-9dd354db-20dc-5c30-d8ae-24f48fcf2726" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.994 [INFO][6671] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:49.994 [INFO][6671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.275 [INFO][6730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.276 [INFO][6730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.276 [INFO][6730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.405 [INFO][6730] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.405 [INFO][6730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.409 [INFO][6730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:33:50.437466 containerd[2100]: 2025-01-13 21:33:50.419 [INFO][6671] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:33:50.438481 containerd[2100]: time="2025-01-13T21:33:50.438319158Z" level=info msg="TearDown network for sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" successfully" Jan 13 21:33:50.438481 containerd[2100]: time="2025-01-13T21:33:50.438359630Z" level=info msg="StopPodSandbox for \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" returns successfully" Jan 13 21:33:50.443291 containerd[2100]: time="2025-01-13T21:33:50.443137224Z" level=info msg="RemoveContainer for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" returns successfully" Jan 13 21:33:50.465396 kubelet[3391]: I0113 21:33:50.465140 3391 scope.go:117] "RemoveContainer" containerID="a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba" Jan 13 21:33:50.467283 containerd[2100]: time="2025-01-13T21:33:50.467245958Z" level=info msg="RemoveContainer for \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\"" Jan 13 21:33:50.487678 kubelet[3391]: I0113 21:33:50.487639 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54104ea0-0440-40a9-9abe-b389894c34cf-tigera-ca-bundle\") pod \"54104ea0-0440-40a9-9abe-b389894c34cf\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " Jan 13 21:33:50.487896 kubelet[3391]: I0113 21:33:50.487697 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xggnp\" (UniqueName: \"kubernetes.io/projected/54104ea0-0440-40a9-9abe-b389894c34cf-kube-api-access-xggnp\") pod \"54104ea0-0440-40a9-9abe-b389894c34cf\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " Jan 13 21:33:50.487896 kubelet[3391]: I0113 21:33:50.487732 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54104ea0-0440-40a9-9abe-b389894c34cf-typha-certs\") pod \"54104ea0-0440-40a9-9abe-b389894c34cf\" (UID: \"54104ea0-0440-40a9-9abe-b389894c34cf\") " Jan 13 21:33:50.508509 containerd[2100]: time="2025-01-13T21:33:50.506108370Z" level=info msg="RemoveContainer for \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\" returns successfully" Jan 13 21:33:50.551040 kubelet[3391]: I0113 21:33:50.551001 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/54104ea0-0440-40a9-9abe-b389894c34cf-kube-api-access-xggnp" (OuterVolumeSpecName: "kube-api-access-xggnp") pod "54104ea0-0440-40a9-9abe-b389894c34cf" (UID: "54104ea0-0440-40a9-9abe-b389894c34cf"). InnerVolumeSpecName "kube-api-access-xggnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:33:50.559318 kubelet[3391]: I0113 21:33:50.559284 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54104ea0-0440-40a9-9abe-b389894c34cf-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "54104ea0-0440-40a9-9abe-b389894c34cf" (UID: "54104ea0-0440-40a9-9abe-b389894c34cf"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:33:50.562800 kubelet[3391]: I0113 21:33:50.562760 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54104ea0-0440-40a9-9abe-b389894c34cf-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "54104ea0-0440-40a9-9abe-b389894c34cf" (UID: "54104ea0-0440-40a9-9abe-b389894c34cf"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:33:50.590764 kubelet[3391]: I0113 21:33:50.590733 3391 scope.go:117] "RemoveContainer" containerID="f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea" Jan 13 21:33:50.602763 containerd[2100]: time="2025-01-13T21:33:50.601178432Z" level=info msg="RemoveContainer for \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\"" Jan 13 21:33:50.602989 kubelet[3391]: I0113 21:33:50.601372 3391 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54104ea0-0440-40a9-9abe-b389894c34cf-tigera-ca-bundle\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.602989 kubelet[3391]: I0113 21:33:50.601400 3391 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xggnp\" (UniqueName: \"kubernetes.io/projected/54104ea0-0440-40a9-9abe-b389894c34cf-kube-api-access-xggnp\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.602989 kubelet[3391]: I0113 21:33:50.601413 3391 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/54104ea0-0440-40a9-9abe-b389894c34cf-typha-certs\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.625905 containerd[2100]: time="2025-01-13T21:33:50.618436514Z" level=info msg="RemoveContainer for \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\" returns successfully" Jan 13 21:33:50.626066 kubelet[3391]: I0113 21:33:50.624418 3391 scope.go:117] "RemoveContainer" containerID="21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8" Jan 13 21:33:50.681308 containerd[2100]: time="2025-01-13T21:33:50.634674104Z" level=error msg="ContainerStatus for \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\": not found" Jan 13 21:33:50.725572 kubelet[3391]: I0113 21:33:50.725395 3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6516d32b-0c84-4b53-a73d-5859b4a02633-tigera-ca-bundle\") pod \"6516d32b-0c84-4b53-a73d-5859b4a02633\" (UID: \"6516d32b-0c84-4b53-a73d-5859b4a02633\") " Jan 13 21:33:50.727850 kubelet[3391]: I0113 21:33:50.725992 
3391 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9qv\" (UniqueName: \"kubernetes.io/projected/6516d32b-0c84-4b53-a73d-5859b4a02633-kube-api-access-sq9qv\") pod \"6516d32b-0c84-4b53-a73d-5859b4a02633\" (UID: \"6516d32b-0c84-4b53-a73d-5859b4a02633\") " Jan 13 21:33:50.764782 kubelet[3391]: I0113 21:33:50.764732 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6516d32b-0c84-4b53-a73d-5859b4a02633-kube-api-access-sq9qv" (OuterVolumeSpecName: "kube-api-access-sq9qv") pod "6516d32b-0c84-4b53-a73d-5859b4a02633" (UID: "6516d32b-0c84-4b53-a73d-5859b4a02633"). InnerVolumeSpecName "kube-api-access-sq9qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:33:50.787897 kubelet[3391]: I0113 21:33:50.786963 3391 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6516d32b-0c84-4b53-a73d-5859b4a02633-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6516d32b-0c84-4b53-a73d-5859b4a02633" (UID: "6516d32b-0c84-4b53-a73d-5859b4a02633"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:33:50.808088 kubelet[3391]: E0113 21:33:50.807807 3391 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\": not found" containerID="21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8" Jan 13 21:33:50.817271 kubelet[3391]: I0113 21:33:50.817048 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8"} err="failed to get container status \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"21aa5069122deaa58584a553b207ba99a6bb56541b7e926d08f08bb1b0fe71e8\": not found" Jan 13 21:33:50.818312 kubelet[3391]: I0113 21:33:50.818278 3391 scope.go:117] "RemoveContainer" containerID="a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba" Jan 13 21:33:50.821966 containerd[2100]: time="2025-01-13T21:33:50.819774052Z" level=error msg="ContainerStatus for \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\": not found" Jan 13 21:33:50.822432 kubelet[3391]: E0113 21:33:50.822403 3391 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\": not found" containerID="a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba" Jan 13 21:33:50.822838 kubelet[3391]: I0113 21:33:50.822772 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba"} err="failed to get container status \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8c35e3bdc1ed6cdc2139c1480182e99763d9cbf4901dee719311bc9ff4a57ba\": not found" Jan 13 21:33:50.822838 kubelet[3391]: I0113 
21:33:50.822810 3391 scope.go:117] "RemoveContainer" containerID="f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea" Jan 13 21:33:50.825230 containerd[2100]: time="2025-01-13T21:33:50.823288316Z" level=error msg="ContainerStatus for \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\": not found" Jan 13 21:33:50.825516 kubelet[3391]: E0113 21:33:50.825437 3391 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\": not found" containerID="f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea" Jan 13 21:33:50.825516 kubelet[3391]: I0113 21:33:50.825504 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea"} err="failed to get container status \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4928e5573813d695eb78a172909eab8f6b047e62b6ce19670871d244b1673ea\": not found" Jan 13 21:33:50.831175 kubelet[3391]: I0113 21:33:50.831045 3391 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sq9qv\" (UniqueName: \"kubernetes.io/projected/6516d32b-0c84-4b53-a73d-5859b4a02633-kube-api-access-sq9qv\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.831175 kubelet[3391]: I0113 21:33:50.831090 3391 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6516d32b-0c84-4b53-a73d-5859b4a02633-tigera-ca-bundle\") on node \"ip-172-31-23-216\" DevicePath \"\"" Jan 13 21:33:50.950003 systemd[1]: var-lib-kubelet-pods-6516d32b\x2d0c84\x2d4b53\x2da73d\x2d5859b4a02633-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 13 21:33:50.950750 systemd[1]: run-netns-cni\x2d9dd354db\x2d20dc\x2d5c30\x2dd8ae\x2d24f48fcf2726.mount: Deactivated successfully. Jan 13 21:33:50.950975 systemd[1]: var-lib-kubelet-pods-6516d32b\x2d0c84\x2d4b53\x2da73d\x2d5859b4a02633-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsq9qv.mount: Deactivated successfully. Jan 13 21:33:50.951116 systemd[1]: var-lib-kubelet-pods-54104ea0\x2d0440\x2d40a9\x2d9abe\x2db389894c34cf-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 13 21:33:50.951256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b-rootfs.mount: Deactivated successfully. Jan 13 21:33:50.951385 systemd[1]: var-lib-kubelet-pods-54104ea0\x2d0440\x2d40a9\x2d9abe\x2db389894c34cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxggnp.mount: Deactivated successfully. Jan 13 21:33:50.951515 systemd[1]: var-lib-kubelet-pods-54104ea0\x2d0440\x2d40a9\x2d9abe\x2db389894c34cf-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jan 13 21:33:50.970866 containerd[2100]: time="2025-01-13T21:33:50.968218310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgfmz,Uid:726cbd57-f04b-4938-895f-e39534270dab,Namespace:calico-system,Attempt:0,}" Jan 13 21:33:51.075271 kubelet[3391]: I0113 21:33:51.073306 3391 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ee2e4eb5-5141-49ae-b4d9-ac88f344b28e" path="/var/lib/kubelet/pods/ee2e4eb5-5141-49ae-b4d9-ac88f344b28e/volumes" Jan 13 21:33:51.212168 kubelet[3391]: I0113 21:33:51.210866 3391 scope.go:117] "RemoveContainer" containerID="dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8" Jan 13 21:33:51.230353 containerd[2100]: time="2025-01-13T21:33:51.229517270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:33:51.230353 containerd[2100]: time="2025-01-13T21:33:51.229641181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:33:51.230353 containerd[2100]: time="2025-01-13T21:33:51.229863082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:51.234345 containerd[2100]: time="2025-01-13T21:33:51.233858577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:33:51.343155 containerd[2100]: time="2025-01-13T21:33:51.342981358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgfmz,Uid:726cbd57-f04b-4938-895f-e39534270dab,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\"" Jan 13 21:33:51.389663 containerd[2100]: time="2025-01-13T21:33:51.389576922Z" level=info msg="RemoveContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\"" Jan 13 21:33:51.439165 containerd[2100]: time="2025-01-13T21:33:51.436413839Z" level=info msg="RemoveContainer for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" returns successfully" Jan 13 21:33:51.447769 kubelet[3391]: I0113 21:33:51.447741 3391 scope.go:117] "RemoveContainer" containerID="dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8" Jan 13 21:33:51.454999 containerd[2100]: time="2025-01-13T21:33:51.454943889Z" level=error msg="ContainerStatus for \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\": not found" Jan 13 21:33:51.470938 kubelet[3391]: E0113 21:33:51.469566 3391 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\": not found" containerID="dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8" Jan 13 21:33:51.470938 kubelet[3391]: I0113 21:33:51.469626 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8"} err="failed to get container status \"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"dc73a1e969a2e18e01802b7dc5c7590837da06f71b4eca2aed90783f94a6eff8\": not found" Jan 13 21:33:51.494854 containerd[2100]: time="2025-01-13T21:33:51.491127758Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 21:33:51.547488 containerd[2100]: time="2025-01-13T21:33:51.542965368Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"66de66339d1755a25d9f7d3ce4e4cf59d296ac5a0d068e3f5e5f10779f1c92c9\"" Jan 13 21:33:51.551418 containerd[2100]: time="2025-01-13T21:33:51.550752491Z" level=info msg="StartContainer for \"66de66339d1755a25d9f7d3ce4e4cf59d296ac5a0d068e3f5e5f10779f1c92c9\"" Jan 13 21:33:51.703518 containerd[2100]: time="2025-01-13T21:33:51.703484171Z" level=info msg="StartContainer for \"66de66339d1755a25d9f7d3ce4e4cf59d296ac5a0d068e3f5e5f10779f1c92c9\" returns successfully" Jan 13 21:33:51.877856 containerd[2100]: time="2025-01-13T21:33:51.877776458Z" level=info msg="shim disconnected" id=66de66339d1755a25d9f7d3ce4e4cf59d296ac5a0d068e3f5e5f10779f1c92c9 namespace=k8s.io Jan 13 21:33:51.878387 containerd[2100]: time="2025-01-13T21:33:51.878175504Z" level=warning msg="cleaning up after shim disconnected" id=66de66339d1755a25d9f7d3ce4e4cf59d296ac5a0d068e3f5e5f10779f1c92c9 namespace=k8s.io Jan 13 21:33:51.878387 containerd[2100]: time="2025-01-13T21:33:51.878198504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:51.950519 sshd[6705]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:51.956593 systemd[1]: sshd@18-172.31.23.216:22-147.75.109.163:56440.service: Deactivated successfully. Jan 13 21:33:51.962179 systemd-logind[2056]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:33:51.962978 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:33:51.965702 systemd-logind[2056]: Removed session 19. Jan 13 21:33:51.977223 systemd[1]: Started sshd@19-172.31.23.216:22-147.75.109.163:56454.service - OpenSSH per-connection server daemon (147.75.109.163:56454). Jan 13 21:33:52.133847 sshd[6896]: Accepted publickey for core from 147.75.109.163 port 56454 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:52.134542 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:52.140624 systemd-logind[2056]: New session 20 of user core. Jan 13 21:33:52.145177 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:33:52.147065 ntpd[2042]: Deleting interface #12 calif1fabf08c78, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=35 secs Jan 13 21:33:52.148686 ntpd[2042]: 13 Jan 21:33:52 ntpd[2042]: Deleting interface #12 calif1fabf08c78, fe80::ecee:eeff:feee:eeee%11#123, interface stats: received=0, sent=0, dropped=0, active_time=35 secs Jan 13 21:33:52.345041 containerd[2100]: time="2025-01-13T21:33:52.343596529Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:33:52.407675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713455315.mount: Deactivated successfully. 
Jan 13 21:33:52.418278 containerd[2100]: time="2025-01-13T21:33:52.418233794Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751\"" Jan 13 21:33:52.419766 containerd[2100]: time="2025-01-13T21:33:52.418925387Z" level=info msg="StartContainer for \"ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751\"" Jan 13 21:33:52.428533 sshd[6896]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:52.438259 systemd[1]: sshd@19-172.31.23.216:22-147.75.109.163:56454.service: Deactivated successfully. Jan 13 21:33:52.447810 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:33:52.449116 systemd-logind[2056]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:33:52.471040 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:52.469117 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:52.469144 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:52.475145 systemd-logind[2056]: Removed session 20. Jan 13 21:33:52.579145 containerd[2100]: time="2025-01-13T21:33:52.579094661Z" level=info msg="StartContainer for \"ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751\" returns successfully" Jan 13 21:33:53.073927 kubelet[3391]: I0113 21:33:53.073859 3391 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="54104ea0-0440-40a9-9abe-b389894c34cf" path="/var/lib/kubelet/pods/54104ea0-0440-40a9-9abe-b389894c34cf/volumes" Jan 13 21:33:53.077284 kubelet[3391]: I0113 21:33:53.077210 3391 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6516d32b-0c84-4b53-a73d-5859b4a02633" path="/var/lib/kubelet/pods/6516d32b-0c84-4b53-a73d-5859b4a02633/volumes" Jan 13 21:33:54.517886 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:54.518244 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:54.518255 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:54.972403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751-rootfs.mount: Deactivated successfully. 
Jan 13 21:33:54.996067 containerd[2100]: time="2025-01-13T21:33:54.995560713Z" level=info msg="shim disconnected" id=ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751 namespace=k8s.io Jan 13 21:33:54.996067 containerd[2100]: time="2025-01-13T21:33:54.995649729Z" level=warning msg="cleaning up after shim disconnected" id=ecebadbf57c975f5487eae69754100af01c4c7f3f614d9a666c0efd2c7952751 namespace=k8s.io Jan 13 21:33:54.996067 containerd[2100]: time="2025-01-13T21:33:54.995663764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:33:55.698637 containerd[2100]: time="2025-01-13T21:33:55.698231196Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 21:33:55.782981 containerd[2100]: time="2025-01-13T21:33:55.782928116Z" level=info msg="CreateContainer within sandbox \"4ee41f28686021ff118d2611996eee333f90293151f30e33c78dd3925c423eb2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e93c9d79d66c7df1c05ff873bcd2b8990c5957feb6af33f7b9f69b9c7a06ea42\"" Jan 13 21:33:55.790673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415330133.mount: Deactivated successfully. Jan 13 21:33:55.796493 containerd[2100]: time="2025-01-13T21:33:55.796444514Z" level=info msg="StartContainer for \"e93c9d79d66c7df1c05ff873bcd2b8990c5957feb6af33f7b9f69b9c7a06ea42\"" Jan 13 21:33:55.913246 containerd[2100]: time="2025-01-13T21:33:55.913194361Z" level=info msg="StartContainer for \"e93c9d79d66c7df1c05ff873bcd2b8990c5957feb6af33f7b9f69b9c7a06ea42\" returns successfully" Jan 13 21:33:56.566022 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:56.566037 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:56.566856 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:56.783496 kubelet[3391]: I0113 21:33:56.783435 3391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-lgfmz" podStartSLOduration=7.745364028 podStartE2EDuration="7.745364028s" podCreationTimestamp="2025-01-13 21:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:33:56.714163195 +0000 UTC m=+98.182515047" watchObservedRunningTime="2025-01-13 21:33:56.745364028 +0000 UTC m=+98.213715881" Jan 13 21:33:57.467204 systemd[1]: Started sshd@20-172.31.23.216:22-147.75.109.163:36944.service - OpenSSH per-connection server daemon (147.75.109.163:36944). Jan 13 21:33:57.787586 sshd[7060]: Accepted publickey for core from 147.75.109.163 port 36944 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:33:57.793530 sshd[7060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:33:57.823635 systemd-logind[2056]: New session 21 of user core. Jan 13 21:33:57.831017 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:33:58.616019 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:33:58.617401 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:33:58.617428 systemd-resolved[1974]: Flushed all caches. Jan 13 21:33:58.777578 (udev-worker)[7198]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:33:58.781417 (udev-worker)[7199]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:33:59.111396 sshd[7060]: pam_unix(sshd:session): session closed for user core Jan 13 21:33:59.128453 systemd[1]: sshd@20-172.31.23.216:22-147.75.109.163:36944.service: Deactivated successfully. Jan 13 21:33:59.133390 systemd-logind[2056]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:33:59.134184 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:33:59.155424 systemd-logind[2056]: Removed session 21. Jan 13 21:34:00.661229 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:34:00.663741 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:34:00.661238 systemd-resolved[1974]: Flushed all caches. Jan 13 21:34:04.145195 systemd[1]: Started sshd@21-172.31.23.216:22-147.75.109.163:36950.service - OpenSSH per-connection server daemon (147.75.109.163:36950). Jan 13 21:34:04.359613 sshd[7245]: Accepted publickey for core from 147.75.109.163 port 36950 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:04.365110 sshd[7245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:04.398690 systemd-logind[2056]: New session 22 of user core. Jan 13 21:34:04.407234 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:34:04.684551 sshd[7245]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:04.699628 systemd[1]: sshd@21-172.31.23.216:22-147.75.109.163:36950.service: Deactivated successfully. Jan 13 21:34:04.706291 systemd-logind[2056]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:34:04.707116 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:34:04.711023 systemd-logind[2056]: Removed session 22. Jan 13 21:34:09.714892 systemd[1]: Started sshd@22-172.31.23.216:22-147.75.109.163:56720.service - OpenSSH per-connection server daemon (147.75.109.163:56720). Jan 13 21:34:09.897048 sshd[7276]: Accepted publickey for core from 147.75.109.163 port 56720 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:09.899900 sshd[7276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:09.906131 systemd-logind[2056]: New session 23 of user core. Jan 13 21:34:09.911198 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:34:10.164855 sshd[7276]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:10.182288 systemd[1]: sshd@22-172.31.23.216:22-147.75.109.163:56720.service: Deactivated successfully. Jan 13 21:34:10.194740 systemd-logind[2056]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:34:10.195999 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:34:10.198207 systemd-logind[2056]: Removed session 23. Jan 13 21:34:15.190385 systemd[1]: Started sshd@23-172.31.23.216:22-147.75.109.163:56724.service - OpenSSH per-connection server daemon (147.75.109.163:56724). Jan 13 21:34:15.377996 sshd[7290]: Accepted publickey for core from 147.75.109.163 port 56724 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:15.379688 sshd[7290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:15.386366 systemd-logind[2056]: New session 24 of user core. Jan 13 21:34:15.396624 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:34:15.717399 sshd[7290]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:15.734146 systemd[1]: sshd@23-172.31.23.216:22-147.75.109.163:56724.service: Deactivated successfully. 
Jan 13 21:34:15.742898 systemd-logind[2056]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:34:15.743288 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:34:15.760185 systemd-logind[2056]: Removed session 24. Jan 13 21:34:20.744460 systemd[1]: Started sshd@24-172.31.23.216:22-147.75.109.163:51612.service - OpenSSH per-connection server daemon (147.75.109.163:51612). Jan 13 21:34:20.941019 sshd[7314]: Accepted publickey for core from 147.75.109.163 port 51612 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:20.945662 sshd[7314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:20.956596 systemd-logind[2056]: New session 25 of user core. Jan 13 21:34:20.962360 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:34:21.350446 systemd[1]: run-containerd-runc-k8s.io-e93c9d79d66c7df1c05ff873bcd2b8990c5957feb6af33f7b9f69b9c7a06ea42-runc.s732S6.mount: Deactivated successfully. Jan 13 21:34:21.554157 sshd[7314]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:21.566308 systemd[1]: sshd@24-172.31.23.216:22-147.75.109.163:51612.service: Deactivated successfully. Jan 13 21:34:21.575263 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:34:21.576794 systemd-logind[2056]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:34:21.578778 systemd-logind[2056]: Removed session 25. Jan 13 21:34:22.289891 kubelet[3391]: I0113 21:34:22.288021 3391 scope.go:117] "RemoveContainer" containerID="a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5" Jan 13 21:34:22.301578 containerd[2100]: time="2025-01-13T21:34:22.301529472Z" level=info msg="RemoveContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\"" Jan 13 21:34:22.307799 containerd[2100]: time="2025-01-13T21:34:22.307744575Z" level=info msg="RemoveContainer for \"a0004ccaa8241599b8dc04df0263dda6fc4085ecf9d93964fa8eb417f7370bd5\" returns successfully" Jan 13 21:34:22.314587 containerd[2100]: time="2025-01-13T21:34:22.314546108Z" level=info msg="StopPodSandbox for \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\"" Jan 13 21:34:22.328747 containerd[2100]: time="2025-01-13T21:34:22.328593790Z" level=info msg="TearDown network for sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" successfully" Jan 13 21:34:22.328747 containerd[2100]: time="2025-01-13T21:34:22.328736863Z" level=info msg="StopPodSandbox for \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" returns successfully" Jan 13 21:34:22.336629 containerd[2100]: time="2025-01-13T21:34:22.336552747Z" level=info msg="RemovePodSandbox for \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\"" Jan 13 21:34:22.340164 containerd[2100]: time="2025-01-13T21:34:22.340115113Z" level=info msg="Forcibly stopping sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\"" Jan 13 21:34:22.340292 containerd[2100]: time="2025-01-13T21:34:22.340230171Z" level=info msg="TearDown network for sandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" successfully" Jan 13 21:34:22.351449 containerd[2100]: time="2025-01-13T21:34:22.351401736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:34:22.351449 containerd[2100]: time="2025-01-13T21:34:22.351489314Z" level=info msg="RemovePodSandbox \"86065975bf38d7eed27a648545feb36ac980ab74f010ab503ba8b83db0700f88\" returns successfully" Jan 13 21:34:22.352484 containerd[2100]: time="2025-01-13T21:34:22.352292889Z" level=info msg="StopPodSandbox for \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\"" Jan 13 21:34:22.352484 containerd[2100]: time="2025-01-13T21:34:22.352393481Z" level=info msg="TearDown network for sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" successfully" Jan 13 21:34:22.352484 containerd[2100]: time="2025-01-13T21:34:22.352405138Z" level=info msg="StopPodSandbox for \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" returns successfully" Jan 13 21:34:22.353039 containerd[2100]: time="2025-01-13T21:34:22.353010610Z" level=info msg="RemovePodSandbox for \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\"" Jan 13 21:34:22.353116 containerd[2100]: time="2025-01-13T21:34:22.353047289Z" level=info msg="Forcibly stopping sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\"" Jan 13 21:34:22.353169 containerd[2100]: time="2025-01-13T21:34:22.353126877Z" level=info msg="TearDown network for sandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" successfully" Jan 13 21:34:22.358463 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:34:22.359938 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:34:22.358521 systemd-resolved[1974]: Flushed all caches. Jan 13 21:34:22.370823 containerd[2100]: time="2025-01-13T21:34:22.370387973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:34:22.370823 containerd[2100]: time="2025-01-13T21:34:22.370500598Z" level=info msg="RemovePodSandbox \"db9256db5d192b356351934106bb6b94a629e6ecefda2cbc743d9eb60a42d88b\" returns successfully" Jan 13 21:34:22.371811 containerd[2100]: time="2025-01-13T21:34:22.371325986Z" level=info msg="StopPodSandbox for \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\"" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.631 [WARNING][7385] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.633 [INFO][7385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.633 [INFO][7385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" iface="eth0" netns="" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.635 [INFO][7385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.635 [INFO][7385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.699 [INFO][7391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.699 [INFO][7391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.700 [INFO][7391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.707 [WARNING][7391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.707 [INFO][7391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.712 [INFO][7391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:22.725120 containerd[2100]: 2025-01-13 21:34:22.721 [INFO][7385] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.725120 containerd[2100]: time="2025-01-13T21:34:22.725099210Z" level=info msg="TearDown network for sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" successfully" Jan 13 21:34:22.728165 containerd[2100]: time="2025-01-13T21:34:22.725133435Z" level=info msg="StopPodSandbox for \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" returns successfully" Jan 13 21:34:22.728165 containerd[2100]: time="2025-01-13T21:34:22.726726583Z" level=info msg="RemovePodSandbox for \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\"" Jan 13 21:34:22.728165 containerd[2100]: time="2025-01-13T21:34:22.726768293Z" level=info msg="Forcibly stopping sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\"" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.792 [WARNING][7409] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" WorkloadEndpoint="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.793 [INFO][7409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.793 [INFO][7409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" iface="eth0" netns="" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.793 [INFO][7409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.793 [INFO][7409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.851 [INFO][7415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.852 [INFO][7415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.852 [INFO][7415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.876 [WARNING][7415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.877 [INFO][7415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" HandleID="k8s-pod-network.730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Workload="ip--172--31--23--216-k8s-calico--kube--controllers--97574c6fb--sdstw-eth0" Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.881 [INFO][7415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 21:34:22.885899 containerd[2100]: 2025-01-13 21:34:22.883 [INFO][7409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7" Jan 13 21:34:22.885899 containerd[2100]: time="2025-01-13T21:34:22.885684892Z" level=info msg="TearDown network for sandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" successfully" Jan 13 21:34:22.896583 containerd[2100]: time="2025-01-13T21:34:22.896188210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:34:22.896583 containerd[2100]: time="2025-01-13T21:34:22.896323894Z" level=info msg="RemovePodSandbox \"730d5ce859f774b043e518251a98851c2c0f88a7124d555c205b096292c300e7\" returns successfully" Jan 13 21:34:24.407136 systemd-resolved[1974]: Under memory pressure, flushing caches. Jan 13 21:34:24.407150 systemd-resolved[1974]: Flushed all caches. Jan 13 21:34:24.408226 systemd-journald[1569]: Under memory pressure, flushing caches. Jan 13 21:34:26.577729 systemd[1]: Started sshd@25-172.31.23.216:22-147.75.109.163:51628.service - OpenSSH per-connection server daemon (147.75.109.163:51628). Jan 13 21:34:26.799317 sshd[7421]: Accepted publickey for core from 147.75.109.163 port 51628 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:26.801263 sshd[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:26.812301 systemd-logind[2056]: New session 26 of user core. Jan 13 21:34:26.839960 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:34:27.284099 sshd[7421]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:27.298313 systemd-logind[2056]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:34:27.301286 systemd[1]: sshd@25-172.31.23.216:22-147.75.109.163:51628.service: Deactivated successfully. Jan 13 21:34:27.317107 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:34:27.320293 systemd-logind[2056]: Removed session 26. Jan 13 21:34:32.314217 systemd[1]: Started sshd@26-172.31.23.216:22-147.75.109.163:47258.service - OpenSSH per-connection server daemon (147.75.109.163:47258). Jan 13 21:34:32.482736 sshd[7436]: Accepted publickey for core from 147.75.109.163 port 47258 ssh2: RSA SHA256:nsHiw8PVVL24fpE4j+jgc6OXg1spU6FuMiVFhQManAc Jan 13 21:34:32.484425 sshd[7436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:34:32.500911 systemd-logind[2056]: New session 27 of user core. 
Jan 13 21:34:32.508211 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:34:32.724575 sshd[7436]: pam_unix(sshd:session): session closed for user core Jan 13 21:34:32.729388 systemd[1]: sshd@26-172.31.23.216:22-147.75.109.163:47258.service: Deactivated successfully. Jan 13 21:34:32.734745 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:34:32.737373 systemd-logind[2056]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:34:32.739045 systemd-logind[2056]: Removed session 27. Jan 13 21:34:50.982181 systemd[1]: run-containerd-runc-k8s.io-e93c9d79d66c7df1c05ff873bcd2b8990c5957feb6af33f7b9f69b9c7a06ea42-runc.QISMLE.mount: Deactivated successfully. Jan 13 21:34:57.672014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d-rootfs.mount: Deactivated successfully. Jan 13 21:34:57.682166 containerd[2100]: time="2025-01-13T21:34:57.659131768Z" level=info msg="shim disconnected" id=8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d namespace=k8s.io Jan 13 21:34:57.682743 containerd[2100]: time="2025-01-13T21:34:57.682704149Z" level=warning msg="cleaning up after shim disconnected" id=8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d namespace=k8s.io Jan 13 21:34:57.682743 containerd[2100]: time="2025-01-13T21:34:57.682736671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:57.828621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973-rootfs.mount: Deactivated successfully. Jan 13 21:34:57.838622 containerd[2100]: time="2025-01-13T21:34:57.829091504Z" level=info msg="shim disconnected" id=56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973 namespace=k8s.io Jan 13 21:34:57.838622 containerd[2100]: time="2025-01-13T21:34:57.838619930Z" level=warning msg="cleaning up after shim disconnected" id=56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973 namespace=k8s.io Jan 13 21:34:57.838883 containerd[2100]: time="2025-01-13T21:34:57.838638349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:34:57.958411 kubelet[3391]: I0113 21:34:57.957914 3391 scope.go:117] "RemoveContainer" containerID="8e089319162d58d42fe81468d0ac72eb99e2c506604af18e84ec5cb1d809fa5d" Jan 13 21:34:57.962685 kubelet[3391]: I0113 21:34:57.962286 3391 scope.go:117] "RemoveContainer" containerID="56e62e046bfbd46893da97df4402ab9d91b1b02d9c3f5fd921eea6f9a72e2973" Jan 13 21:34:57.978713 containerd[2100]: time="2025-01-13T21:34:57.978621696Z" level=info msg="CreateContainer within sandbox \"47353b1a209c2723fd0157b217c0286bdbd4040e4c50880177b8f8efa3f1d76a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 13 21:34:57.983934 containerd[2100]: time="2025-01-13T21:34:57.983887109Z" level=info msg="CreateContainer within sandbox \"7cf301a189f0b4830896083a67cfe6d71185fc017cd8dfd9326c06f3d8329675\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 21:34:58.039847 containerd[2100]: time="2025-01-13T21:34:58.039608678Z" level=info msg="CreateContainer within sandbox \"47353b1a209c2723fd0157b217c0286bdbd4040e4c50880177b8f8efa3f1d76a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d21fa10ae66d8ceec5d80edaa83b1d8cd6f933c58f7bcfc045584f16bcdb4298\"" Jan 13 21:34:58.041201 containerd[2100]: time="2025-01-13T21:34:58.041094062Z" level=info msg="StartContainer for 
\"d21fa10ae66d8ceec5d80edaa83b1d8cd6f933c58f7bcfc045584f16bcdb4298\"" Jan 13 21:34:58.048331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911304170.mount: Deactivated successfully. Jan 13 21:34:58.105370 containerd[2100]: time="2025-01-13T21:34:58.105235644Z" level=info msg="CreateContainer within sandbox \"7cf301a189f0b4830896083a67cfe6d71185fc017cd8dfd9326c06f3d8329675\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3ece5c206a92038c0f77d584e7599c4a076d83307e4c1d5ed40d0c777b314953\"" Jan 13 21:34:58.105979 containerd[2100]: time="2025-01-13T21:34:58.105932209Z" level=info msg="StartContainer for \"3ece5c206a92038c0f77d584e7599c4a076d83307e4c1d5ed40d0c777b314953\"" Jan 13 21:34:58.171266 containerd[2100]: time="2025-01-13T21:34:58.171030284Z" level=info msg="StartContainer for \"d21fa10ae66d8ceec5d80edaa83b1d8cd6f933c58f7bcfc045584f16bcdb4298\" returns successfully" Jan 13 21:34:58.265956 containerd[2100]: time="2025-01-13T21:34:58.265139811Z" level=info msg="StartContainer for \"3ece5c206a92038c0f77d584e7599c4a076d83307e4c1d5ed40d0c777b314953\" returns successfully" Jan 13 21:34:58.667213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890642504.mount: Deactivated successfully. Jan 13 21:35:02.018976 kubelet[3391]: E0113 21:35:02.018917 3391 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 21:35:02.606100 containerd[2100]: time="2025-01-13T21:35:02.606028724Z" level=info msg="shim disconnected" id=a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba namespace=k8s.io Jan 13 21:35:02.606100 containerd[2100]: time="2025-01-13T21:35:02.606103393Z" level=warning msg="cleaning up after shim disconnected" id=a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba namespace=k8s.io Jan 13 21:35:02.615613 containerd[2100]: time="2025-01-13T21:35:02.606116912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:35:02.616350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba-rootfs.mount: Deactivated successfully. Jan 13 21:35:02.995921 kubelet[3391]: I0113 21:35:02.995776 3391 scope.go:117] "RemoveContainer" containerID="a022267ffc4e7d0869fabfab5499ca3cb5dd8279d8c7b7f59b8826999c37fcba" Jan 13 21:35:03.032450 containerd[2100]: time="2025-01-13T21:35:03.032388138Z" level=info msg="CreateContainer within sandbox \"bdc5b78ead2c90a9887594baa40916d413282fc6b0447bd6501c6d5e40a5e035\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 21:35:03.097745 containerd[2100]: time="2025-01-13T21:35:03.097379231Z" level=info msg="CreateContainer within sandbox \"bdc5b78ead2c90a9887594baa40916d413282fc6b0447bd6501c6d5e40a5e035\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b06612d0f8016f6821aa4ac81e831fe62ecffed57d84e7aa3b3773d8df406442\"" Jan 13 21:35:03.102425 containerd[2100]: time="2025-01-13T21:35:03.098598972Z" level=info msg="StartContainer for \"b06612d0f8016f6821aa4ac81e831fe62ecffed57d84e7aa3b3773d8df406442\"" Jan 13 21:35:03.103226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679722774.mount: Deactivated successfully. 
Jan 13 21:35:03.376003 containerd[2100]: time="2025-01-13T21:35:03.375949549Z" level=info msg="StartContainer for \"b06612d0f8016f6821aa4ac81e831fe62ecffed57d84e7aa3b3773d8df406442\" returns successfully" Jan 13 21:35:12.019914 kubelet[3391]: E0113 21:35:12.019872 3391 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-216?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"