Jan 30 13:51:45.030635 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:51:45.030677 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.030694 kernel: BIOS-provided physical RAM map:
Jan 30 13:51:45.030706 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:51:45.030717 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:51:45.030729 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:51:45.030747 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 30 13:51:45.030760 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 30 13:51:45.030772 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 30 13:51:45.030784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:51:45.030797 kernel: NX (Execute Disable) protection: active
Jan 30 13:51:45.030809 kernel: APIC: Static calls initialized
Jan 30 13:51:45.030822 kernel: SMBIOS 2.7 present.
Jan 30 13:51:45.030835 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 30 13:51:45.030854 kernel: Hypervisor detected: KVM
Jan 30 13:51:45.030868 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:51:45.030882 kernel: kvm-clock: using sched offset of 6484921121 cycles
Jan 30 13:51:45.030897 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:51:45.030911 kernel: tsc: Detected 2499.998 MHz processor
Jan 30 13:51:45.030926 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:51:45.030940 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:51:45.030957 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 30 13:51:45.030972 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:51:45.030986 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:51:45.030999 kernel: Using GB pages for direct mapping
Jan 30 13:51:45.031013 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:51:45.031027 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 30 13:51:45.031041 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.031056 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:51:45.031070 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.031087 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 30 13:51:45.031101 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.031115 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:51:45.031129 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 30 13:51:45.031143 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:51:45.031157 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 30 13:51:45.031171 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 30 13:51:45.031185 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 30 13:51:45.031199 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 30 13:51:45.031216 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 30 13:51:45.031237 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 30 13:51:45.031251 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 30 13:51:45.031266 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 30 13:51:45.031281 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 30 13:51:45.031300 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 30 13:51:45.031315 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 30 13:51:45.031329 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 30 13:51:45.031344 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 30 13:51:45.031359 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:51:45.031374 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:51:45.031389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 30 13:51:45.031404 kernel: NUMA: Initialized distance table, cnt=1
Jan 30 13:51:45.033511 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 30 13:51:45.033551 kernel: Zone ranges:
Jan 30 13:51:45.033564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:51:45.033578 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 30 13:51:45.033593 kernel: Normal empty
Jan 30 13:51:45.033608 kernel: Movable zone start for each node
Jan 30 13:51:45.033622 kernel: Early memory node ranges
Jan 30 13:51:45.033637 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:51:45.033651 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 30 13:51:45.033665 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 30 13:51:45.033680 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:51:45.033698 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:51:45.033712 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 30 13:51:45.033727 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 13:51:45.033741 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:51:45.033756 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 30 13:51:45.033770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:51:45.033784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:51:45.033799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:51:45.033813 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:51:45.033831 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:51:45.033846 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:51:45.033860 kernel: TSC deadline timer available
Jan 30 13:51:45.033874 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:51:45.033889 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:51:45.033903 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 30 13:51:45.033917 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:51:45.033932 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:51:45.033946 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:51:45.033964 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:51:45.033979 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:51:45.033993 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:51:45.034007 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:51:45.034022 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:51:45.034038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.034053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:51:45.034067 kernel: random: crng init done
Jan 30 13:51:45.034084 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:51:45.034098 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:51:45.034113 kernel: Fallback order for Node 0: 0
Jan 30 13:51:45.034127 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 30 13:51:45.034141 kernel: Policy zone: DMA32
Jan 30 13:51:45.034156 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:51:45.034171 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 13:51:45.034272 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:51:45.034291 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:51:45.034310 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:51:45.034325 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:51:45.034340 kernel: Dynamic Preempt: voluntary
Jan 30 13:51:45.034354 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:51:45.034370 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:51:45.034385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:51:45.034399 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:51:45.036460 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:51:45.036564 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:51:45.036586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:51:45.036601 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:51:45.036616 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:51:45.036630 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:51:45.036644 kernel: Console: colour VGA+ 80x25
Jan 30 13:51:45.036659 kernel: printk: console [ttyS0] enabled
Jan 30 13:51:45.036673 kernel: ACPI: Core revision 20230628
Jan 30 13:51:45.036688 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 30 13:51:45.036709 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:51:45.036726 kernel: x2apic enabled
Jan 30 13:51:45.036741 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:51:45.036767 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 13:51:45.036785 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 30 13:51:45.036799 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:51:45.036814 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:51:45.036829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:51:45.036844 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:51:45.036858 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:51:45.036873 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:51:45.036888 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:51:45.036902 kernel: RETBleed: Vulnerable
Jan 30 13:51:45.036920 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:51:45.036934 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:51:45.036949 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:51:45.036964 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:51:45.036978 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:51:45.036993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:51:45.037007 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:51:45.037025 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 30 13:51:45.037039 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 30 13:51:45.037054 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:51:45.037069 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:51:45.037083 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:51:45.037098 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 30 13:51:45.037113 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:51:45.037128 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 30 13:51:45.037143 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 30 13:51:45.037158 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 30 13:51:45.037176 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 30 13:51:45.037200 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 30 13:51:45.037224 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 30 13:51:45.037245 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 30 13:51:45.037261 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:51:45.037276 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:51:45.037292 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:51:45.037307 kernel: landlock: Up and running.
Jan 30 13:51:45.037323 kernel: SELinux: Initializing.
Jan 30 13:51:45.037339 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.037355 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.037371 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:51:45.037390 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.037406 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.037435 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:51:45.037451 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:51:45.037467 kernel: signal: max sigframe size: 3632
Jan 30 13:51:45.037483 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:51:45.037499 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:51:45.037515 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:51:45.037531 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:51:45.037550 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:51:45.037566 kernel: .... node #0, CPUs: #1
Jan 30 13:51:45.037627 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 13:51:45.037645 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:51:45.037661 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:51:45.037677 kernel: smpboot: Max logical packages: 1
Jan 30 13:51:45.037693 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 30 13:51:45.037710 kernel: devtmpfs: initialized
Jan 30 13:51:45.037763 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:51:45.037780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:51:45.037796 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.037812 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:51:45.037828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:51:45.037844 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:51:45.037860 kernel: audit: type=2000 audit(1738245103.910:1): state=initialized audit_enabled=0 res=1
Jan 30 13:51:45.037876 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:51:45.037892 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:51:45.037911 kernel: cpuidle: using governor menu
Jan 30 13:51:45.037927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:51:45.037943 kernel: dca service started, version 1.12.1
Jan 30 13:51:45.037959 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:51:45.037975 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:51:45.037991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:51:45.038007 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:51:45.038023 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:51:45.038039 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:51:45.038059 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:51:45.038075 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:51:45.038090 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:51:45.038107 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:51:45.038123 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 13:51:45.038138 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:51:45.038155 kernel: ACPI: Interpreter enabled
Jan 30 13:51:45.038171 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:51:45.038187 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:51:45.038203 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:51:45.038222 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:51:45.038238 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 13:51:45.038254 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:51:45.039361 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:51:45.039748 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:51:45.039880 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:51:45.039898 kernel: acpiphp: Slot [3] registered
Jan 30 13:51:45.039917 kernel: acpiphp: Slot [4] registered
Jan 30 13:51:45.039931 kernel: acpiphp: Slot [5] registered
Jan 30 13:51:45.039945 kernel: acpiphp: Slot [6] registered
Jan 30 13:51:45.039959 kernel: acpiphp: Slot [7] registered
Jan 30 13:51:45.039972 kernel: acpiphp: Slot [8] registered
Jan 30 13:51:45.039986 kernel: acpiphp: Slot [9] registered
Jan 30 13:51:45.040000 kernel: acpiphp: Slot [10] registered
Jan 30 13:51:45.040014 kernel: acpiphp: Slot [11] registered
Jan 30 13:51:45.040028 kernel: acpiphp: Slot [12] registered
Jan 30 13:51:45.040045 kernel: acpiphp: Slot [13] registered
Jan 30 13:51:45.040059 kernel: acpiphp: Slot [14] registered
Jan 30 13:51:45.040072 kernel: acpiphp: Slot [15] registered
Jan 30 13:51:45.040086 kernel: acpiphp: Slot [16] registered
Jan 30 13:51:45.040099 kernel: acpiphp: Slot [17] registered
Jan 30 13:51:45.040112 kernel: acpiphp: Slot [18] registered
Jan 30 13:51:45.040126 kernel: acpiphp: Slot [19] registered
Jan 30 13:51:45.040139 kernel: acpiphp: Slot [20] registered
Jan 30 13:51:45.040152 kernel: acpiphp: Slot [21] registered
Jan 30 13:51:45.040166 kernel: acpiphp: Slot [22] registered
Jan 30 13:51:45.040184 kernel: acpiphp: Slot [23] registered
Jan 30 13:51:45.040200 kernel: acpiphp: Slot [24] registered
Jan 30 13:51:45.040216 kernel: acpiphp: Slot [25] registered
Jan 30 13:51:45.040232 kernel: acpiphp: Slot [26] registered
Jan 30 13:51:45.040248 kernel: acpiphp: Slot [27] registered
Jan 30 13:51:45.040264 kernel: acpiphp: Slot [28] registered
Jan 30 13:51:45.040280 kernel: acpiphp: Slot [29] registered
Jan 30 13:51:45.040296 kernel: acpiphp: Slot [30] registered
Jan 30 13:51:45.040312 kernel: acpiphp: Slot [31] registered
Jan 30 13:51:45.040331 kernel: PCI host bridge to bus 0000:00
Jan 30 13:51:45.040476 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:51:45.040594 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:51:45.040738 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:51:45.040852 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:51:45.040964 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:51:45.041142 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:51:45.042144 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:51:45.042294 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 30 13:51:45.042448 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 13:51:45.042579 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 30 13:51:45.042705 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 30 13:51:45.042829 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 30 13:51:45.042951 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 30 13:51:45.043162 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 30 13:51:45.043290 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 30 13:51:45.043432 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 30 13:51:45.044200 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 30 13:51:45.044348 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 30 13:51:45.044514 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:51:45.044653 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:51:45.044962 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:51:45.045105 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 30 13:51:45.045243 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:51:45.045373 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 30 13:51:45.045392 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:51:45.045467 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:51:45.045489 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:51:45.045506 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:51:45.045553 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:51:45.045570 kernel: iommu: Default domain type: Translated
Jan 30 13:51:45.045585 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:51:45.045628 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:51:45.045646 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:51:45.045662 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:51:45.045677 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 30 13:51:45.046075 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 30 13:51:45.046222 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 30 13:51:45.046367 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:51:45.046387 kernel: vgaarb: loaded
Jan 30 13:51:45.046402 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 30 13:51:45.046461 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 30 13:51:45.046476 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:51:45.046492 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:51:45.046507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:51:45.046532 kernel: pnp: PnP ACPI init
Jan 30 13:51:45.046546 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 13:51:45.046560 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:51:45.046574 kernel: NET: Registered PF_INET protocol family
Jan 30 13:51:45.046591 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:51:45.046607 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:51:45.046624 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:51:45.046641 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:51:45.046661 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:51:45.046678 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:51:45.046695 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.046712 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:51:45.046728 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:51:45.046744 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:51:45.046889 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:51:45.047057 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:51:45.047174 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:51:45.047293 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:51:45.047544 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:51:45.047566 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:51:45.047583 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:51:45.047599 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 13:51:45.047659 kernel: clocksource: Switched to clocksource tsc
Jan 30 13:51:45.047676 kernel: Initialise system trusted keyrings
Jan 30 13:51:45.047691 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:51:45.047711 kernel: Key type asymmetric registered
Jan 30 13:51:45.047727 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:51:45.047743 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:51:45.047758 kernel: io scheduler mq-deadline registered
Jan 30 13:51:45.047774 kernel: io scheduler kyber registered
Jan 30 13:51:45.047790 kernel: io scheduler bfq registered
Jan 30 13:51:45.047805 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:51:45.047821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:51:45.047837 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:51:45.047855 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:51:45.047870 kernel: i8042: Warning: Keylock active
Jan 30 13:51:45.047884 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:51:45.047900 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:51:45.048111 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 13:51:45.048337 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 13:51:45.048519 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:51:44 UTC (1738245104)
Jan 30 13:51:45.048779 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 13:51:45.048809 kernel: intel_pstate: CPU model not supported
Jan 30 13:51:45.048824 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:51:45.048838 kernel: Segment Routing with IPv6
Jan 30 13:51:45.048852 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:51:45.048866 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:51:45.048880 kernel: Key type dns_resolver registered
Jan 30 13:51:45.048894 kernel: IPI shorthand broadcast: enabled
Jan 30 13:51:45.048908 kernel: sched_clock: Marking stable (586023760, 265414241)->(952903182, -101465181)
Jan 30 13:51:45.048921 kernel: registered taskstats version 1
Jan 30 13:51:45.048938 kernel: Loading compiled-in X.509 certificates
Jan 30 13:51:45.048953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:51:45.048966 kernel: Key type .fscrypt registered
Jan 30 13:51:45.048981 kernel: Key type fscrypt-provisioning registered
Jan 30 13:51:45.048995 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:51:45.049009 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:51:45.049023 kernel: ima: No architecture policies found
Jan 30 13:51:45.049037 kernel: clk: Disabling unused clocks
Jan 30 13:51:45.049052 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:51:45.049069 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:51:45.049083 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:51:45.049097 kernel: Run /init as init process
Jan 30 13:51:45.049110 kernel: with arguments:
Jan 30 13:51:45.049125 kernel: /init
Jan 30 13:51:45.049140 kernel: with environment:
Jan 30 13:51:45.049155 kernel: HOME=/
Jan 30 13:51:45.049169 kernel: TERM=linux
Jan 30 13:51:45.049184 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:51:45.049211 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:51:45.049246 systemd[1]: Detected virtualization amazon.
Jan 30 13:51:45.049267 systemd[1]: Detected architecture x86-64.
Jan 30 13:51:45.049283 systemd[1]: Running in initrd.
Jan 30 13:51:45.049300 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:51:45.049319 systemd[1]: Hostname set to <localhost>.
Jan 30 13:51:45.049335 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:51:45.049351 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:51:45.049369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:51:45.049385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:51:45.049414 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:51:45.049449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:51:45.049465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:51:45.049483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:51:45.049595 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:51:45.049614 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:51:45.049631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:51:45.049646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:51:45.049662 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:51:45.049685 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:51:45.049703 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:51:45.049718 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:51:45.049734 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:51:45.049751 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:51:45.049770 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:51:45.049788 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:51:45.049804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:51:45.049820 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:51:45.049838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:51:45.049853 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:51:45.049868 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:51:45.049884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:51:45.049901 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:51:45.049915 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:51:45.049932 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:51:45.049951 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:51:45.049970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:51:45.049989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:45.050006 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:51:45.050051 systemd-journald[178]: Collecting audit messages is disabled.
Jan 30 13:51:45.050094 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:51:45.050112 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:51:45.050133 systemd-journald[178]: Journal started
Jan 30 13:51:45.050173 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2ded965a0d126a6781b4ce19182f87) is 4.8M, max 38.6M, 33.7M free.
Jan 30 13:51:45.043824 systemd-modules-load[179]: Inserted module 'overlay'
Jan 30 13:51:45.067655 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:51:45.070442 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:51:45.095452 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:51:45.098697 kernel: Bridge firewalling registered
Jan 30 13:51:45.098051 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 30 13:51:45.098664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:51:45.218648 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:51:45.220272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:45.222812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:51:45.234909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:45.238286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:51:45.260732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:51:45.266480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:51:45.280079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:51:45.290727 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:51:45.296873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:51:45.300526 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:45.305988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:51:45.335595 dracut-cmdline[215]: dracut-dracut-053
Jan 30 13:51:45.342226 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:51:45.358165 systemd-resolved[211]: Positive Trust Anchors:
Jan 30 13:51:45.358189 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:51:45.358236 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:51:45.372307 systemd-resolved[211]: Defaulting to hostname 'linux'.
Jan 30 13:51:45.374725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:51:45.377045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:51:45.452469 kernel: SCSI subsystem initialized
Jan 30 13:51:45.463498 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:51:45.476449 kernel: iscsi: registered transport (tcp)
Jan 30 13:51:45.500443 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:51:45.500520 kernel: QLogic iSCSI HBA Driver
Jan 30 13:51:45.544545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:51:45.552667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:51:45.582022 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:51:45.582090 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:51:45.582105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:51:45.637458 kernel: raid6: avx512x4 gen() 14825 MB/s
Jan 30 13:51:45.654448 kernel: raid6: avx512x2 gen() 16472 MB/s
Jan 30 13:51:45.671456 kernel: raid6: avx512x1 gen() 16229 MB/s
Jan 30 13:51:45.688457 kernel: raid6: avx2x4 gen() 14756 MB/s
Jan 30 13:51:45.705450 kernel: raid6: avx2x2 gen() 16180 MB/s
Jan 30 13:51:45.722448 kernel: raid6: avx2x1 gen() 8061 MB/s
Jan 30 13:51:45.722540 kernel: raid6: using algorithm avx512x2 gen() 16472 MB/s
Jan 30 13:51:45.739661 kernel: raid6: .... xor() 20693 MB/s, rmw enabled
Jan 30 13:51:45.739744 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:51:45.762445 kernel: xor: automatically using best checksumming function avx
Jan 30 13:51:45.953528 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:51:45.966680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:51:45.976718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:51:45.992133 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 30 13:51:45.998217 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:51:46.008820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:51:46.035066 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 30 13:51:46.069364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:51:46.078671 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:51:46.140530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:51:46.153325 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:51:46.191214 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:51:46.196995 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:51:46.199880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:51:46.202578 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:51:46.208633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:51:46.247324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:51:46.265779 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:51:46.269312 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 13:51:46.292536 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 13:51:46.292731 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 30 13:51:46.293018 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:30:8d:ec:ed:85
Jan 30 13:51:46.284220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:51:46.284357 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:46.286853 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:46.288683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:51:46.289532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:46.291192 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:46.295318 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:51:46.299735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:51:46.324608 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:51:46.324659 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:51:46.344038 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 13:51:46.344302 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:51:46.359204 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 13:51:46.370467 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:51:46.370603 kernel: GPT:9289727 != 16777215
Jan 30 13:51:46.370624 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:51:46.370641 kernel: GPT:9289727 != 16777215
Jan 30 13:51:46.370688 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:51:46.370709 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.529440 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (457)
Jan 30 13:51:46.554037 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 30 13:51:46.586407 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 13:51:46.593730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:51:46.646024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 13:51:46.662594 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 13:51:46.664116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 13:51:46.677341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:51:46.690757 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:51:46.701529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:51:46.716029 disk-uuid[620]: Primary Header is updated.
Jan 30 13:51:46.716029 disk-uuid[620]: Secondary Entries is updated.
Jan 30 13:51:46.716029 disk-uuid[620]: Secondary Header is updated.
Jan 30 13:51:46.722444 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.728210 kernel: GPT:disk_guids don't match.
Jan 30 13:51:46.728278 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:51:46.728300 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.736735 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:46.744825 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:51:47.738476 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:51:47.740076 disk-uuid[621]: The operation has completed successfully.
Jan 30 13:51:47.950438 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:51:47.950567 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:51:47.970947 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:51:47.974785 sh[970]: Success
Jan 30 13:51:48.000450 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:51:48.108652 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:51:48.121731 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:51:48.126134 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:51:48.154536 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:51:48.154612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:48.154633 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:51:48.155486 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:51:48.156840 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:51:48.277461 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:51:48.291116 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:51:48.293139 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:51:48.306071 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:51:48.313066 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:51:48.347290 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:48.347349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:48.347501 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:48.353510 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:48.369870 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:48.369142 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:51:48.377882 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:51:48.388748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:51:48.452303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:51:48.460617 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:51:48.488148 systemd-networkd[1162]: lo: Link UP
Jan 30 13:51:48.488162 systemd-networkd[1162]: lo: Gained carrier
Jan 30 13:51:48.491730 systemd-networkd[1162]: Enumeration completed
Jan 30 13:51:48.492220 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:51:48.492226 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:51:48.493155 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:51:48.501158 systemd[1]: Reached target network.target - Network.
Jan 30 13:51:48.506707 systemd-networkd[1162]: eth0: Link UP
Jan 30 13:51:48.506717 systemd-networkd[1162]: eth0: Gained carrier
Jan 30 13:51:48.506731 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:51:48.522860 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.19.166/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:51:48.746233 ignition[1085]: Ignition 2.19.0
Jan 30 13:51:48.746250 ignition[1085]: Stage: fetch-offline
Jan 30 13:51:48.746540 ignition[1085]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:48.746553 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:48.748593 ignition[1085]: Ignition finished successfully
Jan 30 13:51:48.752846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:51:48.758841 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:51:48.781950 ignition[1171]: Ignition 2.19.0
Jan 30 13:51:48.781961 ignition[1171]: Stage: fetch
Jan 30 13:51:48.782385 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:48.782394 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:48.782504 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:48.810314 ignition[1171]: PUT result: OK
Jan 30 13:51:48.817705 ignition[1171]: parsed url from cmdline: ""
Jan 30 13:51:48.817716 ignition[1171]: no config URL provided
Jan 30 13:51:48.817724 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:51:48.817736 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:51:48.817756 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:48.820836 ignition[1171]: PUT result: OK
Jan 30 13:51:48.820903 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 13:51:48.824891 ignition[1171]: GET result: OK
Jan 30 13:51:48.825002 ignition[1171]: parsing config with SHA512: c9135ad6c894e258a1bba11cc8731ae6dc315442e143ff0fbd0da1b7b7ff51c9003e6bf673a59ec0d05f76bdfd4263491ca05fa3b650f92c3ab41286040a0763
Jan 30 13:51:48.856138 unknown[1171]: fetched base config from "system"
Jan 30 13:51:48.856158 unknown[1171]: fetched base config from "system"
Jan 30 13:51:48.856962 ignition[1171]: fetch: fetch complete
Jan 30 13:51:48.856168 unknown[1171]: fetched user config from "aws"
Jan 30 13:51:48.856970 ignition[1171]: fetch: fetch passed
Jan 30 13:51:48.858980 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:51:48.857032 ignition[1171]: Ignition finished successfully
Jan 30 13:51:48.881009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:51:48.903845 ignition[1178]: Ignition 2.19.0
Jan 30 13:51:48.903862 ignition[1178]: Stage: kargs
Jan 30 13:51:48.904326 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:48.904339 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:48.906012 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:48.906929 ignition[1178]: PUT result: OK
Jan 30 13:51:48.915342 ignition[1178]: kargs: kargs passed
Jan 30 13:51:48.915462 ignition[1178]: Ignition finished successfully
Jan 30 13:51:48.917649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:51:48.926707 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:51:48.958467 ignition[1184]: Ignition 2.19.0
Jan 30 13:51:48.958482 ignition[1184]: Stage: disks
Jan 30 13:51:48.959111 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:48.959124 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:48.959233 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:48.961012 ignition[1184]: PUT result: OK
Jan 30 13:51:48.970254 ignition[1184]: disks: disks passed
Jan 30 13:51:48.970318 ignition[1184]: Ignition finished successfully
Jan 30 13:51:48.974388 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:51:48.976232 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:51:48.982259 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:51:48.985335 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:51:48.986915 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:51:48.989591 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:51:48.996872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:51:49.056374 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:51:49.059614 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:51:49.069790 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:51:49.253461 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:51:49.254162 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:51:49.257726 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:51:49.278760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:51:49.289583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:51:49.292622 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:51:49.295173 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:51:49.299612 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:51:49.304314 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:51:49.311677 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:51:49.317464 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1211)
Jan 30 13:51:49.319459 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:49.319508 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:49.320904 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:49.333445 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:49.335546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:51:49.746519 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:51:49.766967 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:51:49.773530 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:51:49.791267 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:51:50.179088 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:51:50.187557 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:51:50.191558 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:51:50.200548 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:50.200393 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:51:50.205680 systemd-networkd[1162]: eth0: Gained IPv6LL
Jan 30 13:51:50.232063 ignition[1323]: INFO : Ignition 2.19.0
Jan 30 13:51:50.233855 ignition[1323]: INFO : Stage: mount
Jan 30 13:51:50.233855 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:51:50.233855 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:51:50.233855 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:51:50.239365 ignition[1323]: INFO : PUT result: OK
Jan 30 13:51:50.242113 ignition[1323]: INFO : mount: mount passed
Jan 30 13:51:50.242113 ignition[1323]: INFO : Ignition finished successfully
Jan 30 13:51:50.244597 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:51:50.257613 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:51:50.266293 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:51:50.278721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:51:50.307493 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1335)
Jan 30 13:51:50.309442 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:51:50.309502 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:51:50.310694 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:51:50.315444 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:51:50.317340 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:51:50.353794 ignition[1352]: INFO : Ignition 2.19.0 Jan 30 13:51:50.353794 ignition[1352]: INFO : Stage: files Jan 30 13:51:50.356487 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:50.356487 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:51:50.356487 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:51:50.356487 ignition[1352]: INFO : PUT result: OK Jan 30 13:51:50.363214 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:51:50.385855 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:51:50.385855 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:51:50.431146 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:51:50.432837 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:51:50.434539 unknown[1352]: wrote ssh authorized keys file for user: core Jan 30 13:51:50.435878 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:51:50.437805 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:50.440248 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:51:50.528902 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:51:50.733183 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:50.733183 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:50.737985 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:50.737985 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:50.742953 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:50.742953 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:50.746739 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:50.746739 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:50.746739 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:50.753087 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:50.755791 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:50.755791 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:51:50.762593 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:51:50.762593 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:51:50.762593 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:51:51.091991 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:51:51.476303 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:51:51.476303 ignition[1352]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:51:51.483672 ignition[1352]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:51.486525 ignition[1352]: INFO : files: files passed Jan 30 13:51:51.486525 ignition[1352]: INFO : Ignition finished successfully Jan 30 13:51:51.487827 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:51:51.506761 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:51:51.512353 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:51:51.516280 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:51:51.517701 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:51:51.531465 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:51.531465 initrd-setup-root-after-ignition[1381]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:51.536448 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:51.538494 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:51.541005 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:51:51.549770 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 30 13:51:51.591670 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:51:51.591783 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:51:51.596102 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:51:51.598399 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:51:51.600738 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:51:51.606707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:51:51.628034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:51.638767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:51:51.653479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:51.655821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:51.657301 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:51:51.659314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:51:51.659495 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:51.661930 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:51:51.664138 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:51:51.668072 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:51:51.672306 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:51.674830 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:51.676638 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:51:51.680229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:51.684193 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:51:51.687204 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:51:51.697000 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:51:51.706138 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:51:51.706628 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:51:51.713542 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:51.715216 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:51.725303 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:51:51.727333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:51.732520 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:51:51.732720 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:51.735189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:51:51.735463 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:51.738958 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:51:51.739072 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:51:51.762915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 30 13:51:51.764140 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:51:51.764358 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:51.791705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:51:51.794070 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:51:51.794492 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:51.796134 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:51:51.796312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:51.811919 ignition[1405]: INFO : Ignition 2.19.0 Jan 30 13:51:51.811919 ignition[1405]: INFO : Stage: umount Jan 30 13:51:51.811919 ignition[1405]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:51.811919 ignition[1405]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:51:51.811919 ignition[1405]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:51:51.813161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:51:51.813286 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:51:51.826041 ignition[1405]: INFO : PUT result: OK Jan 30 13:51:51.828668 ignition[1405]: INFO : umount: umount passed Jan 30 13:51:51.828668 ignition[1405]: INFO : Ignition finished successfully Jan 30 13:51:51.831729 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:51:51.832467 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:51:51.837995 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:51:51.839245 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:51:51.840494 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:51:51.840560 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:51:51.841912 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:51:51.841971 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:51:51.844102 systemd[1]: Stopped target network.target - Network. Jan 30 13:51:51.846430 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:51:51.846511 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:51.848619 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:51:51.849027 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:51:51.850958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:51.854345 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:51:51.862456 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:51:51.866521 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:51:51.866571 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:51.868815 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:51:51.868850 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:51.869967 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:51:51.870019 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:51:51.872272 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 30 13:51:51.872319 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:51.874624 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:51:51.876777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:51:51.883720 systemd-networkd[1162]: eth0: DHCPv6 lease lost Jan 30 13:51:51.885879 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:51:51.886714 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:51:51.886872 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:51:51.889377 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:51:51.889627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:51:51.895463 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:51:51.895530 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:51.908718 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:51:51.909626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:51:51.909855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:51.911162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:51:51.911221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:51.912266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:51:51.912315 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:51.913358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:51:51.913408 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:51.914976 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:51.942698 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:51:51.942898 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:51.946051 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:51:51.946125 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:51.948601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:51:51.949565 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:51.953542 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:51:51.953641 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:51.956957 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:51:51.957038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:51:51.959285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:51.959352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:51.968737 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:51:51.970316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:51:51.970399 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:51.971976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 30 13:51:51.972050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:51.992569 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:51:51.992725 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:51:51.996890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:51:51.997813 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:51:52.020874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:51:52.021014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:51:52.026456 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:51:52.028382 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:51:52.028612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:52.037657 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:51:52.055366 systemd[1]: Switching root. Jan 30 13:51:52.105050 systemd-journald[178]: Journal stopped Jan 30 13:51:54.005051 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 30 13:51:54.005143 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:51:54.005218 kernel: SELinux: policy capability open_perms=1 Jan 30 13:51:54.007490 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:51:54.007530 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:51:54.007549 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:51:54.007573 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:51:54.007590 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:51:54.007613 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:51:54.007631 kernel: audit: type=1403 audit(1738245112.417:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:51:54.007651 systemd[1]: Successfully loaded SELinux policy in 39.048ms. Jan 30 13:51:54.007673 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.518ms. Jan 30 13:51:54.007692 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:51:54.007712 systemd[1]: Detected virtualization amazon. Jan 30 13:51:54.007732 systemd[1]: Detected architecture x86-64. Jan 30 13:51:54.007751 systemd[1]: Detected first boot. Jan 30 13:51:54.007775 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:51:54.007797 zram_generator::config[1448]: No configuration found. Jan 30 13:51:54.007817 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:51:54.007839 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:51:54.007856 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:51:54.007875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:54.007894 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:51:54.007913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:51:54.007932 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 30 13:51:54.007953 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:51:54.007972 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:51:54.007991 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:51:54.008010 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:51:54.008029 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:51:54.008047 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:54.008065 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:54.008083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:51:54.008102 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:51:54.008124 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:51:54.008143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:51:54.008160 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:51:54.008178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:54.008197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:51:54.008215 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:51:54.008233 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:54.008254 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:51:54.008273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:54.008292 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:51:54.008408 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:51:54.010482 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:51:54.010510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:51:54.010529 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:51:54.010550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:54.010569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:54.010588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:54.010614 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:51:54.010632 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:51:54.010651 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:51:54.010670 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:51:54.010689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:54.010709 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:51:54.010729 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:51:54.010747 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:51:54.010767 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:51:54.010789 systemd[1]: Reached target machines.target - Containers. Jan 30 13:51:54.010807 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:51:54.010826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:54.010845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:51:54.010863 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:51:54.010882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:54.010900 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:54.010918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:54.010939 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:51:54.010958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:54.010976 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:51:54.011002 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:51:54.011020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:51:54.011090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:51:54.011110 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:51:54.011129 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:51:54.011148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:51:54.011170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:51:54.011189 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:51:54.011207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:51:54.011227 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:51:54.011246 systemd[1]: Stopped verity-setup.service. Jan 30 13:51:54.011265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:54.011284 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:51:54.011302 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:51:54.011324 kernel: fuse: init (API version 7.39) Jan 30 13:51:54.011344 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:51:54.011363 kernel: loop: module loaded Jan 30 13:51:54.011382 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:51:54.011401 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:51:54.011446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:51:54.011465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:54.011482 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 30 13:51:54.011501 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:51:54.011520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:54.011538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:54.011556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:54.011574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:54.011594 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:51:54.011615 kernel: ACPI: bus type drm_connector registered Jan 30 13:51:54.011633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:51:54.011651 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:54.011670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:54.011691 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:54.011713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:54.011731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:51:54.011783 systemd-journald[1530]: Collecting audit messages is disabled. Jan 30 13:51:54.011817 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:54.011836 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:51:54.011854 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:51:54.011872 systemd-journald[1530]: Journal started Jan 30 13:51:54.011910 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec2ded965a0d126a6781b4ce19182f87) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:51:53.479160 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:51:53.500883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 13:51:53.501284 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:51:54.015125 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:51:54.034307 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:51:54.042579 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:51:54.059569 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:51:54.062478 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:51:54.062607 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:54.066764 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:51:54.070729 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:51:54.080574 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:51:54.082723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:54.090260 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:51:54.112659 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
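Two quirks of the journald lines above are worth noting. The 13:51:53 entries ("Queued start job ...") carry earlier timestamps than the "Journal started" line because journald flushes messages that were buffered before it came up, and the runtime journal in /run is later copied to the persistent one under /var/log/journal by systemd-journal-flush.service. To pull this boot's entries for one unit programmatically, assuming the python-systemd bindings are installed:

    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()                                   # current boot only
    reader.add_match(_SYSTEMD_UNIT="ignition-files.service")
    for entry in reader:
        print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])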
Jan 30 13:51:54.114345 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:54.117620 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:51:54.119082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:54.124577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:51:54.133538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:51:54.139551 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:51:54.145036 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:51:54.146800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:54.148396 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:51:54.150625 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:51:54.175045 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec2ded965a0d126a6781b4ce19182f87 is 51.893ms for 962 entries. Jan 30 13:51:54.175045 systemd-journald[1530]: System Journal (/var/log/journal/ec2ded965a0d126a6781b4ce19182f87) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:51:54.246529 systemd-journald[1530]: Received client request to flush runtime journal. Jan 30 13:51:54.246634 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:51:54.181923 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:51:54.194919 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:51:54.197263 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:51:54.209665 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:51:54.254514 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:51:54.265104 udevadm[1581]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:51:54.275122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:54.287987 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:51:54.290541 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:51:54.305455 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:51:54.328225 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:51:54.342844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:51:54.362440 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:51:54.389969 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jan 30 13:51:54.390000 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jan 30 13:51:54.402479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:51:54.506704 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:51:54.676337 kernel: loop3: detected capacity change from 0 to 61336 Jan 30 13:51:54.795453 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 13:51:54.847519 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:51:54.874450 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:51:54.912788 kernel: loop7: detected capacity change from 0 to 61336 Jan 30 13:51:54.937921 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:51:54.939627 (sd-merge)[1599]: Merged extensions into '/usr'. Jan 30 13:51:54.952542 systemd[1]: Reloading requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:51:54.952866 systemd[1]: Reloading... Jan 30 13:51:55.061495 zram_generator::config[1622]: No configuration found. Jan 30 13:51:55.227503 ldconfig[1569]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:51:55.348163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:55.484069 systemd[1]: Reloading finished in 530 ms. Jan 30 13:51:55.529859 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:51:55.534473 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:51:55.552769 systemd[1]: Starting ensure-sysext.service... Jan 30 13:51:55.560763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:51:55.572517 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:51:55.591647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:55.598776 systemd[1]: Reloading requested from client PID 1674 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:51:55.598801 systemd[1]: Reloading... Jan 30 13:51:55.610295 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:51:55.611273 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:51:55.614119 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:51:55.614724 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Jan 30 13:51:55.614819 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Jan 30 13:51:55.622690 systemd-tmpfiles[1675]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:55.622889 systemd-tmpfiles[1675]: Skipping /boot Jan 30 13:51:55.648611 systemd-tmpfiles[1675]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:55.648626 systemd-tmpfiles[1675]: Skipping /boot Jan 30 13:51:55.682125 systemd-udevd[1678]: Using default interface naming scheme 'v255'. Jan 30 13:51:55.770669 zram_generator::config[1707]: No configuration found. Jan 30 13:51:55.964130 (udev-worker)[1706]: Network interface NamePolicy= disabled on kernel command line. 
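The (sd-merge) lines record systemd-sysext overlaying the four extension images onto /usr, and the reload requested by systemd-sysext afterwards is what makes units shipped inside those images visible to PID 1. Before merging, sysext verifies each image's extension-release metadata against the host's os-release. A simplified sketch of that compatibility rule; the real check also covers fields such as ARCHITECTURE and SYSEXT_SCOPE:

    def parse_release(text: str) -> dict:
        # Parse KEY=VALUE lines as found in os-release and
        # /usr/lib/extension-release.d/extension-release.<name>.
        out = {}
        for line in text.splitlines():
            if "=" in line and not line.startswith("#"):
                key, value = line.split("=", 1)
                out[key] = value.strip().strip('"')
        return out

    def compatible(host: dict, ext: dict) -> bool:
        if ext.get("ID") not in ("_any", host.get("ID")):
            return False
        level = ext.get("SYSEXT_LEVEL") or ext.get("VERSION_ID")
        if level is None:
            return True          # no version pin: an ID match suffices
        return level in (host.get("SYSEXT_LEVEL"), host.get("VERSION_ID"))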
Jan 30 13:51:56.063746 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:51:56.083570 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:51:56.087641 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 30 13:51:56.104620 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 30 13:51:56.110019 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:51:56.105955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:56.137544 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 30 13:51:56.164082 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1702) Jan 30 13:51:56.242675 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:51:56.243115 systemd[1]: Reloading finished in 643 ms. Jan 30 13:51:56.259473 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:51:56.263537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:56.265744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:56.326002 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:56.335038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:51:56.349901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:51:56.362739 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:51:56.405065 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:51:56.418918 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:51:56.424682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:56.463105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.464474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:56.472879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:56.486726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:56.497671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:56.498974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:56.507899 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:51:56.509109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.564620 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:51:56.599327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:56.599882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 13:51:56.623535 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:51:56.629779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:51:56.632146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:56.632343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:56.635317 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:56.635539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:56.649001 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:51:56.666220 augenrules[1893]: No rules Jan 30 13:51:56.672390 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:56.682090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.683949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:56.693340 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:51:56.700947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:56.707456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:56.712862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:56.718383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:56.719123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:56.733748 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:51:56.734392 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:51:56.745812 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:51:56.746255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:56.750184 systemd[1]: Finished ensure-sysext.service. Jan 30 13:51:56.770928 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:51:56.776487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:56.776926 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:56.776753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:56.778774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:56.782855 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:56.783229 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:56.827325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:51:56.836828 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:56.839660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:56.846741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 13:51:56.846948 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:56.848337 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:56.854694 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:51:56.856270 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:51:56.870330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:51:56.872310 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:51:56.876785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:56.887693 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:51:56.966702 systemd-networkd[1861]: lo: Link UP Jan 30 13:51:56.966712 systemd-networkd[1861]: lo: Gained carrier Jan 30 13:51:56.968753 systemd-networkd[1861]: Enumeration completed Jan 30 13:51:56.970930 systemd-networkd[1861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:56.971041 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:51:56.971225 systemd-networkd[1861]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:51:56.973298 systemd-resolved[1867]: Positive Trust Anchors: Jan 30 13:51:56.973668 systemd-resolved[1867]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:56.973777 systemd-resolved[1867]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:56.974617 systemd-networkd[1861]: eth0: Link UP Jan 30 13:51:56.974832 systemd-networkd[1861]: eth0: Gained carrier Jan 30 13:51:56.974860 systemd-networkd[1861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:56.980330 systemd-resolved[1867]: Defaulting to hostname 'linux'. Jan 30 13:51:56.989539 systemd-networkd[1861]: eth0: DHCPv4 address 172.31.19.166/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:51:57.022102 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:57.023885 systemd[1]: Reached target network.target - Network. Jan 30 13:51:57.024858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:57.031640 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:51:57.033547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:57.035737 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
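The positive trust anchor logged by systemd-resolved is the root zone's KSK-2017 DS record (key tag 20326, algorithm 8 is RSASHA256, digest type 2 is SHA-256), and the negative anchors are private-use zones that must never be DNSSEC-validated. A tiny parser for that DS presentation format, for reference:

    def parse_ds(rr: str) -> dict:
        owner, _klass, _rtype, tag, alg, dtype, digest = rr.split(maxsplit=6)
        return {"owner": owner, "key_tag": int(tag),
                "algorithm": int(alg), "digest_type": int(dtype),
                "digest": digest}

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))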
Jan 30 13:51:57.036197 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:57.054038 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:51:57.059758 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:51:57.061626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:51:57.070526 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:51:57.074471 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:51:57.079672 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:51:57.079731 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:57.082667 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:57.084682 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:51:57.087843 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:51:57.099211 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:51:57.101140 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:51:57.102762 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:51:57.104935 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:51:57.106023 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:57.107641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:57.107678 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:57.112888 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:51:57.122124 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:51:57.125664 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:51:57.135604 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:51:57.140638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:51:57.143500 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:51:57.146887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:51:57.202148 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:51:57.212897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:51:57.218837 jq[1937]: false Jan 30 13:51:57.224654 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:51:57.230114 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:51:57.238632 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:51:57.250656 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:51:57.253052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:51:57.254576 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 30 13:51:57.261923 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:51:57.268167 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:51:57.280727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:51:57.282040 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:51:57.314381 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:51:57.314896 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:51:57.344583 jq[1951]: true Jan 30 13:51:57.370103 dbus-daemon[1936]: [system] SELinux support is enabled Jan 30 13:51:57.371840 extend-filesystems[1938]: Found loop4 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found loop5 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found loop6 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found loop7 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p1 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p2 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p3 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found usr Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p4 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p6 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p7 Jan 30 13:51:57.374397 extend-filesystems[1938]: Found nvme0n1p9 Jan 30 13:51:57.374397 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9 Jan 30 13:51:57.379774 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:51:57.403871 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1861 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:51:57.390998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:51:57.391053 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:51:57.393165 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:51:57.393197 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:51:57.425339 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:51:57.427305 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:51:57.446120 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:51:57.449936 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9 Jan 30 13:51:57.451770 coreos-metadata[1935]: Jan 30 13:51:57.451 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:51:57.479007 coreos-metadata[1935]: Jan 30 13:51:57.453 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:51:57.479007 coreos-metadata[1935]: Jan 30 13:51:57.462 INFO Fetch successful Jan 30 13:51:57.479007 coreos-metadata[1935]: Jan 30 13:51:57.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:51:57.479174 extend-filesystems[1982]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.481 INFO Fetch successful Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.481 INFO Fetch successful Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.482 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.483 INFO Fetch successful Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.483 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.484 INFO Fetch failed with 404: resource not found Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.484 INFO Fetch successful Jan 30 13:51:57.487604 coreos-metadata[1935]: Jan 30 13:51:57.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:51:57.492732 tar[1953]: linux-amd64/helm Jan 30 13:51:57.511708 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:51:57.511749 jq[1970]: true Jan 30 13:51:57.500245 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.493 INFO Fetch successful
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.495 INFO Fetch successful
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.496 INFO Fetch successful
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.496 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 30 13:51:57.512040 coreos-metadata[1935]: Jan 30 13:51:57.498 INFO Fetch successful
Jan 30 13:51:57.513840 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: ----------------------------------------------------
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: corporation. Support and training for ntp-4 are
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: available at https://www.nwtime.org/support
Jan 30 13:51:57.519201 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: ----------------------------------------------------
Jan 30 13:51:57.523875 update_engine[1949]: I20250130 13:51:57.512246 1949 main.cc:92] Flatcar Update Engine starting
Jan 30 13:51:57.513876 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 30 13:51:57.546412 update_engine[1949]: I20250130 13:51:57.530598 1949 update_check_scheduler.cc:74] Next update check in 11m29s
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: proto: precision = 0.083 usec (-23)
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: basedate set to 2025-01-17
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listen normally on 3 eth0 172.31.19.166:123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listen normally on 4 lo [::1]:123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: bind(21) AF_INET6 fe80::430:8dff:feec:ed85%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: unable to create socket on eth0 (5) for fe80::430:8dff:feec:ed85%2#123
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: failed to init interface for address fe80::430:8dff:feec:ed85%2
Jan 30 13:51:57.546513 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Jan 30 13:51:57.531563 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:51:57.513888 ntpd[1940]: ----------------------------------------------------
Jan 30 13:51:57.544454 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:51:57.513898 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Jan 30 13:51:57.513908 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 30 13:51:57.513919 ntpd[1940]: corporation. Support and training for ntp-4 are
Jan 30 13:51:57.513930 ntpd[1940]: available at https://www.nwtime.org/support
Jan 30 13:51:57.513940 ntpd[1940]: ----------------------------------------------------
Jan 30 13:51:57.526057 ntpd[1940]: proto: precision = 0.083 usec (-23)
Jan 30 13:51:57.526528 ntpd[1940]: basedate set to 2025-01-17
Jan 30 13:51:57.526547 ntpd[1940]: gps base set to 2025-01-19 (week 2350)
Jan 30 13:51:57.539915 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Jan 30 13:51:57.539974 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 30 13:51:57.544596 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Jan 30 13:51:57.544642 ntpd[1940]: Listen normally on 3 eth0 172.31.19.166:123
Jan 30 13:51:57.544698 ntpd[1940]: Listen normally on 4 lo [::1]:123
Jan 30 13:51:57.544749 ntpd[1940]: bind(21) AF_INET6 fe80::430:8dff:feec:ed85%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:51:57.544770 ntpd[1940]: unable to create socket on eth0 (5) for fe80::430:8dff:feec:ed85%2#123
Jan 30 13:51:57.565745 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:51:57.565745 ntpd[1940]: 30 Jan 13:51:57 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:51:57.554569 (ntainerd)[1983]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:51:57.544784 ntpd[1940]: failed to init interface for address fe80::430:8dff:feec:ed85%2
Jan 30 13:51:57.544819 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Jan 30 13:51:57.558513 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:51:57.558551 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 30 13:51:57.646863 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 30 13:51:57.670448 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 30 13:51:57.694457 extend-filesystems[1982]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 30 13:51:57.694457 extend-filesystems[1982]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:51:57.694457 extend-filesystems[1982]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 30 13:51:57.716394 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9
Jan 30 13:51:57.696291 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:51:57.696912 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:51:57.753772 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1706)
Jan 30 13:51:57.750566 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 13:51:57.774435 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 13:51:57.779638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
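The resize that extend-filesystems.service just finished is the standard first-boot root grow: the partition was extended, then the mounted ext4 filesystem was resized online from 553472 to 1489915 4 KiB blocks (roughly 2.1 GiB to 5.7 GiB). The manual equivalent is a two-step sketch (device names from the log; growpart is part of cloud-utils):

    growpart /dev/nvme0n1 9    # grow partition 9 to the end of the disk
    resize2fs /dev/nvme0n1p9   # online-resize the mounted ext4 root to fill it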
Jan 30 13:51:57.901998 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 30 13:51:57.902185 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 30 13:51:57.905885 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1984 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 30 13:51:57.920630 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 30 13:51:57.926764 bash[2026]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:51:57.931610 systemd-logind[1947]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:51:57.931640 systemd-logind[1947]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jan 30 13:51:57.931665 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:51:57.936792 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:51:57.948605 systemd-logind[1947]: New seat seat0.
Jan 30 13:51:57.969674 systemd[1]: Starting sshkeys.service...
Jan 30 13:51:57.971617 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:51:58.030035 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 13:51:58.042543 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 13:51:58.060591 polkitd[2033]: Started polkitd version 121
Jan 30 13:51:58.095873 polkitd[2033]: Loading rules from directory /etc/polkit-1/rules.d
Jan 30 13:51:58.095966 polkitd[2033]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 30 13:51:58.110899 polkitd[2033]: Finished loading, compiling and executing 2 rules
Jan 30 13:51:58.112843 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 30 13:51:58.113050 systemd[1]: Started polkit.service - Authorization Manager.
Jan 30 13:51:58.116596 polkitd[2033]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 30 13:51:58.153856 systemd-hostnamed[1984]: Hostname set to (transient)
Jan 30 13:51:58.156606 systemd-resolved[1867]: System hostname changed to 'ip-172-31-19-166'.
Jan 30 13:51:58.173928 locksmithd[1990]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:51:58.342921 coreos-metadata[2068]: Jan 30 13:51:58.342 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 30 13:51:58.349710 coreos-metadata[2068]: Jan 30 13:51:58.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 30 13:51:58.354461 coreos-metadata[2068]: Jan 30 13:51:58.353 INFO Fetch successful
Jan 30 13:51:58.354461 coreos-metadata[2068]: Jan 30 13:51:58.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 30 13:51:58.357171 coreos-metadata[2068]: Jan 30 13:51:58.357 INFO Fetch successful
Jan 30 13:51:58.374682 unknown[2068]: wrote ssh authorized keys file for user: core
Jan 30 13:51:58.427909 update-ssh-keys[2134]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:51:58.428747 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:51:58.440436 systemd[1]: Finished sshkeys.service.
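coreos-metadata-sshkeys@core fetched the instance's registered public key over the same IMDSv2 flow and installed it for user core. A rough shell equivalent, as a sketch only (it reuses the $TOKEN from the earlier example; the real agent hands the key to update-ssh-keys, which rewrites authorized_keys atomically rather than appending):

    curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
        "http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key" \
        >> /home/core/.ssh/authorized_keys   # illustration; not how the agent writes it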
Jan 30 13:51:58.514341 ntpd[1940]: bind(24) AF_INET6 fe80::430:8dff:feec:ed85%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:51:58.514938 ntpd[1940]: 30 Jan 13:51:58 ntpd[1940]: bind(24) AF_INET6 fe80::430:8dff:feec:ed85%2#123 flags 0x11 failed: Cannot assign requested address
Jan 30 13:51:58.514938 ntpd[1940]: 30 Jan 13:51:58 ntpd[1940]: unable to create socket on eth0 (6) for fe80::430:8dff:feec:ed85%2#123
Jan 30 13:51:58.514938 ntpd[1940]: 30 Jan 13:51:58 ntpd[1940]: failed to init interface for address fe80::430:8dff:feec:ed85%2
Jan 30 13:51:58.514405 ntpd[1940]: unable to create socket on eth0 (6) for fe80::430:8dff:feec:ed85%2#123
Jan 30 13:51:58.514464 ntpd[1940]: failed to init interface for address fe80::430:8dff:feec:ed85%2
Jan 30 13:51:58.691978 containerd[1983]: time="2025-01-30T13:51:58.691819474Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:51:58.799455 containerd[1983]: time="2025-01-30T13:51:58.798054228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.800520 containerd[1983]: time="2025-01-30T13:51:58.800472238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:51:58.801251 containerd[1983]: time="2025-01-30T13:51:58.801227631Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:51:58.801349 containerd[1983]: time="2025-01-30T13:51:58.801334109Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:51:58.801626 containerd[1983]: time="2025-01-30T13:51:58.801606842Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:51:58.802026 containerd[1983]: time="2025-01-30T13:51:58.802006784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.802303 containerd[1983]: time="2025-01-30T13:51:58.802280385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:51:58.802641 containerd[1983]: time="2025-01-30T13:51:58.802622971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.802927985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.802955064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.802976798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.802993944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.803085129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.803895 containerd[1983]: time="2025-01-30T13:51:58.803329650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:51:58.805502 containerd[1983]: time="2025-01-30T13:51:58.805470657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:51:58.805592 containerd[1983]: time="2025-01-30T13:51:58.805576571Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:51:58.805779 containerd[1983]: time="2025-01-30T13:51:58.805762492Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:51:58.806302 containerd[1983]: time="2025-01-30T13:51:58.806283381Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:51:58.812132 containerd[1983]: time="2025-01-30T13:51:58.812090765Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:51:58.812339 containerd[1983]: time="2025-01-30T13:51:58.812320575Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:51:58.820139 containerd[1983]: time="2025-01-30T13:51:58.819937953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:51:58.820139 containerd[1983]: time="2025-01-30T13:51:58.820010269Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:51:58.820139 containerd[1983]: time="2025-01-30T13:51:58.820098496Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:51:58.822112 containerd[1983]: time="2025-01-30T13:51:58.820404635Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:51:58.823940 containerd[1983]: time="2025-01-30T13:51:58.823903536Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:51:58.824219 containerd[1983]: time="2025-01-30T13:51:58.824098516Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:51:58.824288 containerd[1983]: time="2025-01-30T13:51:58.824229080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:51:58.824288 containerd[1983]: time="2025-01-30T13:51:58.824251289Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:51:58.824288 containerd[1983]: time="2025-01-30T13:51:58.824274012Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824395 containerd[1983]: time="2025-01-30T13:51:58.824302948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824395 containerd[1983]: time="2025-01-30T13:51:58.824327824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824395 containerd[1983]: time="2025-01-30T13:51:58.824349970Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824395 containerd[1983]: time="2025-01-30T13:51:58.824372952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824394913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824414546Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824447865Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824479191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824501016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824570883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.824617 containerd[1983]: time="2025-01-30T13:51:58.824598603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.824826596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.824894284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.824925292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.824950811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.824972114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825171145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825293506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825319052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825342017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825367024Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:51:58.825440 containerd[1983]: time="2025-01-30T13:51:58.825404475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.827646 containerd[1983]: time="2025-01-30T13:51:58.827470799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.827646 containerd[1983]: time="2025-01-30T13:51:58.827504007Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:51:58.827646 containerd[1983]: time="2025-01-30T13:51:58.827631058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827662250Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827680442Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827703373Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827718871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827737947Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827753802Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:51:58.827950 containerd[1983]: time="2025-01-30T13:51:58.827768500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:51:58.828952 containerd[1983]: time="2025-01-30T13:51:58.828178588Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:51:58.828952 containerd[1983]: time="2025-01-30T13:51:58.828269512Z" level=info msg="Connect containerd service"
Jan 30 13:51:58.828952 containerd[1983]: time="2025-01-30T13:51:58.828325294Z" level=info msg="using legacy CRI server"
Jan 30 13:51:58.828952 containerd[1983]: time="2025-01-30T13:51:58.828337642Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:51:58.829280 sshd_keygen[1960]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:51:58.830317 containerd[1983]: time="2025-01-30T13:51:58.830073528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:51:58.831077 containerd[1983]: time="2025-01-30T13:51:58.830992998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:51:58.831411 containerd[1983]: time="2025-01-30T13:51:58.831387386Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:51:58.831507 containerd[1983]: time="2025-01-30T13:51:58.831465182Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831539606Z" level=info msg="Start subscribing containerd event"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831642360Z" level=info msg="Start recovering state"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831731165Z" level=info msg="Start event monitor"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831751979Z" level=info msg="Start snapshots syncer"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831766244Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831779524Z" level=info msg="Start streaming server"
Jan 30 13:51:58.832457 containerd[1983]: time="2025-01-30T13:51:58.831853476Z" level=info msg="containerd successfully booted in 0.143985s"
Jan 30 13:51:58.832585 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:51:58.889361 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:51:58.898877 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:51:58.907136 systemd[1]: Started sshd@0-172.31.19.166:22-139.178.68.195:49984.service - OpenSSH per-connection server daemon (139.178.68.195:49984).
Jan 30 13:51:58.910371 systemd-networkd[1861]: eth0: Gained IPv6LL
Jan 30 13:51:58.918532 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:51:58.923324 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:51:58.923725 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:51:58.931045 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:51:58.943942 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 30 13:51:58.957186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:51:58.966724 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:51:58.972732 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:51:59.045789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:51:59.060835 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:51:59.065636 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:51:59.067837 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:51:59.099585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:51:59.200859 amazon-ssm-agent[2155]: Initializing new seelog logger
Jan 30 13:51:59.201353 amazon-ssm-agent[2155]: New Seelog Logger Creation Complete
Jan 30 13:51:59.201353 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.201353 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
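The single level=error line from containerd is expected on a node that has not joined a cluster yet: the CRI plugin looked for a network config in /etc/cni/net.d (the NetworkPluginConfDir in the config dump above) and found none. Purely as an illustration of what would satisfy that check, a minimal bridge conflist might look like this (hypothetical file name and values; on a real node the cluster's network add-on installs the CNI config):

    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "examplenet",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
      }]
    }
    EOF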
Jan 30 13:51:59.201645 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 processing appconfig overrides
Jan 30 13:51:59.202117 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.202117 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.202271 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 processing appconfig overrides
Jan 30 13:51:59.204973 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO Proxy environment variables:
Jan 30 13:51:59.206040 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.206040 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.206040 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 processing appconfig overrides
Jan 30 13:51:59.213348 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.213348 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:51:59.213348 amazon-ssm-agent[2155]: 2025/01/30 13:51:59 processing appconfig overrides
Jan 30 13:51:59.283372 sshd[2150]: Accepted publickey for core from 139.178.68.195 port 49984 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:51:59.287561 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:51:59.306211 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO https_proxy:
Jan 30 13:51:59.310753 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:51:59.323491 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:51:59.336249 systemd-logind[1947]: New session 1 of user core.
Jan 30 13:51:59.362538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:51:59.377939 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:51:59.385850 tar[1953]: linux-amd64/LICENSE
Jan 30 13:51:59.391345 tar[1953]: linux-amd64/README.md
Jan 30 13:51:59.398525 (systemd)[2179]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:51:59.409470 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO http_proxy:
Jan 30 13:51:59.419488 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:51:59.507021 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO no_proxy:
Jan 30 13:51:59.607362 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO Checking if agent identity type OnPrem can be assumed
Jan 30 13:51:59.682077 systemd[2179]: Queued start job for default target default.target.
Jan 30 13:51:59.687305 systemd[2179]: Created slice app.slice - User Application Slice.
Jan 30 13:51:59.687352 systemd[2179]: Reached target paths.target - Paths.
Jan 30 13:51:59.687371 systemd[2179]: Reached target timers.target - Timers.
Jan 30 13:51:59.693705 systemd[2179]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:51:59.705888 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO Checking if agent identity type EC2 can be assumed
Jan 30 13:51:59.730031 systemd[2179]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:51:59.730290 systemd[2179]: Reached target sockets.target - Sockets.
Jan 30 13:51:59.730312 systemd[2179]: Reached target basic.target - Basic System.
Jan 30 13:51:59.730370 systemd[2179]: Reached target default.target - Main User Target.
Jan 30 13:51:59.730408 systemd[2179]: Startup finished in 311ms.
Jan 30 13:51:59.731562 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:51:59.744875 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:51:59.804590 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO Agent will take identity from EC2
Jan 30 13:51:59.909736 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:51:59.917814 systemd[1]: Started sshd@1-172.31.19.166:22-139.178.68.195:49986.service - OpenSSH per-connection server daemon (139.178.68.195:49986).
Jan 30 13:52:00.003962 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:52:00.046638 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:52:00.046638 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 30 13:52:00.046638 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 30 13:52:00.046890 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] Starting Core Agent
Jan 30 13:52:00.046890 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 30 13:52:00.046890 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [Registrar] Starting registrar module
Jan 30 13:52:00.046890 amazon-ssm-agent[2155]: 2025-01-30 13:51:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 30 13:52:00.046890 amazon-ssm-agent[2155]: 2025-01-30 13:52:00 INFO [EC2Identity] EC2 registration was successful.
Jan 30 13:52:00.047496 amazon-ssm-agent[2155]: 2025-01-30 13:52:00 INFO [CredentialRefresher] credentialRefresher has started
Jan 30 13:52:00.047689 amazon-ssm-agent[2155]: 2025-01-30 13:52:00 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 30 13:52:00.047807 amazon-ssm-agent[2155]: 2025-01-30 13:52:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 30 13:52:00.103596 amazon-ssm-agent[2155]: 2025-01-30 13:52:00 INFO [CredentialRefresher] Next credential rotation will be in 32.17497758413333 minutes
Jan 30 13:52:00.127014 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 49986 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:00.128022 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:00.135221 systemd-logind[1947]: New session 2 of user core.
Jan 30 13:52:00.138639 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:52:00.263992 sshd[2194]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:00.269056 systemd[1]: sshd@1-172.31.19.166:22-139.178.68.195:49986.service: Deactivated successfully.
Jan 30 13:52:00.271954 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:52:00.274248 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:52:00.276175 systemd-logind[1947]: Removed session 2.
Jan 30 13:52:00.312408 systemd[1]: Started sshd@2-172.31.19.166:22-139.178.68.195:49988.service - OpenSSH per-connection server daemon (139.178.68.195:49988).
Jan 30 13:52:00.475794 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 49988 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:00.478587 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:00.485463 systemd-logind[1947]: New session 3 of user core.
Jan 30 13:52:00.489666 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:52:00.620268 sshd[2201]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:00.630462 systemd[1]: sshd@2-172.31.19.166:22-139.178.68.195:49988.service: Deactivated successfully.
Jan 30 13:52:00.635155 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:52:00.636744 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:52:00.637954 systemd-logind[1947]: Removed session 3.
Jan 30 13:52:00.999676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:52:01.004997 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:52:01.015791 systemd[1]: Startup finished in 733ms (kernel) + 7.709s (initrd) + 8.636s (userspace) = 17.079s.
Jan 30 13:52:01.104998 amazon-ssm-agent[2155]: 2025-01-30 13:52:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 30 13:52:01.211748 amazon-ssm-agent[2155]: 2025-01-30 13:52:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2216) started
Jan 30 13:52:01.343556 amazon-ssm-agent[2155]: 2025-01-30 13:52:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 30 13:52:01.542075 ntpd[1940]: Listen normally on 7 eth0 [fe80::430:8dff:feec:ed85%2]:123
Jan 30 13:52:01.543912 ntpd[1940]: 30 Jan 13:52:01 ntpd[1940]: Listen normally on 7 eth0 [fe80::430:8dff:feec:ed85%2]:123
Jan 30 13:52:01.578468 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:52:02.577090 kernel: hrtimer: interrupt took 3198034 ns
Jan 30 13:52:03.614022 kubelet[2212]: E0130 13:52:03.613932 2212 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:52:03.626714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:52:03.626933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:52:03.634048 systemd[1]: kubelet.service: Consumed 1.045s CPU time.
Jan 30 13:52:04.807157 systemd-resolved[1867]: Clock change detected. Flushing caches.
Jan 30 13:52:10.948780 systemd[1]: Started sshd@3-172.31.19.166:22-139.178.68.195:37168.service - OpenSSH per-connection server daemon (139.178.68.195:37168).
Jan 30 13:52:11.120986 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 37168 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:11.122941 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:11.130330 systemd-logind[1947]: New session 4 of user core.
Jan 30 13:52:11.134441 systemd[1]: Started session-4.scope - Session 4 of User core.
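This closes out the ntpd thread from 13:51:57-58: binding a link-local address fails with "Cannot assign requested address" while IPv6 duplicate address detection (DAD) still marks it tentative. Once systemd-networkd reported "eth0: Gained IPv6LL", ntpd's routing-socket watch picked the address up and "Listen normally on 7 eth0" succeeded. A sketch of how to watch the same transition by hand (standard iproute2; interface name from the log):

    ip -6 addr show dev eth0
    # while DAD is running:  inet6 fe80::.../64 scope link tentative
    # after DAD completes:   inet6 fe80::.../64 scope link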
Jan 30 13:52:11.263584 sshd[2238]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:11.268311 systemd[1]: sshd@3-172.31.19.166:22-139.178.68.195:37168.service: Deactivated successfully.
Jan 30 13:52:11.270417 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:52:11.273017 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:52:11.275010 systemd-logind[1947]: Removed session 4.
Jan 30 13:52:11.304903 systemd[1]: Started sshd@4-172.31.19.166:22-139.178.68.195:37172.service - OpenSSH per-connection server daemon (139.178.68.195:37172).
Jan 30 13:52:11.473784 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 37172 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:11.475973 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:11.496008 systemd-logind[1947]: New session 5 of user core.
Jan 30 13:52:11.503394 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:52:11.630835 sshd[2245]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:11.646991 systemd[1]: sshd@4-172.31.19.166:22-139.178.68.195:37172.service: Deactivated successfully.
Jan 30 13:52:11.652537 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:52:11.674491 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:52:11.686456 systemd[1]: Started sshd@5-172.31.19.166:22-139.178.68.195:37176.service - OpenSSH per-connection server daemon (139.178.68.195:37176).
Jan 30 13:52:11.690274 systemd-logind[1947]: Removed session 5.
Jan 30 13:52:11.863049 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 37176 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:11.865133 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:11.873802 systemd-logind[1947]: New session 6 of user core.
Jan 30 13:52:11.883439 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:52:12.012331 sshd[2252]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:12.018137 systemd[1]: sshd@5-172.31.19.166:22-139.178.68.195:37176.service: Deactivated successfully.
Jan 30 13:52:12.027570 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:52:12.037133 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:52:12.073543 systemd[1]: Started sshd@6-172.31.19.166:22-139.178.68.195:37184.service - OpenSSH per-connection server daemon (139.178.68.195:37184).
Jan 30 13:52:12.075434 systemd-logind[1947]: Removed session 6.
Jan 30 13:52:12.254529 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 37184 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:12.256255 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:12.265819 systemd-logind[1947]: New session 7 of user core.
Jan 30 13:52:12.274772 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:52:12.396801 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:52:12.397245 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:52:12.424843 sudo[2262]: pam_unix(sudo:session): session closed for user root
Jan 30 13:52:12.449230 sshd[2259]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:12.457299 systemd[1]: sshd@6-172.31.19.166:22-139.178.68.195:37184.service: Deactivated successfully.
Jan 30 13:52:12.463677 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:52:12.466075 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:52:12.484124 systemd[1]: Started sshd@7-172.31.19.166:22-139.178.68.195:37186.service - OpenSSH per-connection server daemon (139.178.68.195:37186).
Jan 30 13:52:12.485900 systemd-logind[1947]: Removed session 7.
Jan 30 13:52:12.654988 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 37186 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:12.656975 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:12.662689 systemd-logind[1947]: New session 8 of user core.
Jan 30 13:52:12.668340 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:52:12.768703 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:52:12.769315 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:52:12.773392 sudo[2271]: pam_unix(sudo:session): session closed for user root
Jan 30 13:52:12.779648 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 13:52:12.780131 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:52:12.798767 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 13:52:12.800452 auditctl[2274]: No rules
Jan 30 13:52:12.801669 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:52:12.802351 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 13:52:12.805838 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:52:12.846855 augenrules[2292]: No rules
Jan 30 13:52:12.848762 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:52:12.850586 sudo[2270]: pam_unix(sudo:session): session closed for user root
Jan 30 13:52:12.874823 sshd[2267]: pam_unix(sshd:session): session closed for user core
Jan 30 13:52:12.881119 systemd[1]: sshd@7-172.31.19.166:22-139.178.68.195:37186.service: Deactivated successfully.
Jan 30 13:52:12.883458 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:52:12.885060 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:52:12.886872 systemd-logind[1947]: Removed session 8.
Jan 30 13:52:12.915583 systemd[1]: Started sshd@8-172.31.19.166:22-139.178.68.195:37190.service - OpenSSH per-connection server daemon (139.178.68.195:37190).
Jan 30 13:52:13.104175 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 37190 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:52:13.107783 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:52:13.114083 systemd-logind[1947]: New session 9 of user core.
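Sessions 7 and 8 are a provisioning script at work: setenforce 1, then deleting the shipped audit rule files and restarting audit-rules, which recompiles whatever remains in /etc/audit/rules.d (nothing, hence "No rules" from both auditctl and augenrules). The same steps by hand, as a sketch using the standard audit userspace tools:

    auditctl -l          # list loaded rules; prints "No rules" when empty
    auditctl -D          # flush all loaded rules
    augenrules --load    # rebuild from /etc/audit/rules.d/*.rules and load the result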
Jan 30 13:52:13.122407 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:52:13.223638 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:52:13.225297 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:52:13.713520 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 13:52:13.715623 (dockerd)[2319]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 13:52:14.168577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:52:14.196507 dockerd[2319]: time="2025-01-30T13:52:14.196297421Z" level=info msg="Starting up"
Jan 30 13:52:14.196942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:52:14.462287 dockerd[2319]: time="2025-01-30T13:52:14.461849456Z" level=info msg="Loading containers: start."
Jan 30 13:52:14.529342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:52:14.547615 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:52:14.659498 kubelet[2348]: E0130 13:52:14.659428 2348 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:52:14.664633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:52:14.664838 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:52:14.679307 kernel: Initializing XFRM netlink socket
Jan 30 13:52:14.712871 (udev-worker)[2350]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:52:14.793950 systemd-networkd[1861]: docker0: Link UP
Jan 30 13:52:14.814797 dockerd[2319]: time="2025-01-30T13:52:14.814746649Z" level=info msg="Loading containers: done."
Jan 30 13:52:14.837280 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4042168825-merged.mount: Deactivated successfully.
Jan 30 13:52:14.848802 dockerd[2319]: time="2025-01-30T13:52:14.848743554Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 13:52:14.849110 dockerd[2319]: time="2025-01-30T13:52:14.848883131Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 13:52:14.849193 dockerd[2319]: time="2025-01-30T13:52:14.849166340Z" level=info msg="Daemon has completed initialization"
Jan 30 13:52:14.894750 dockerd[2319]: time="2025-01-30T13:52:14.894685513Z" level=info msg="API listen on /run/docker.sock"
Jan 30 13:52:14.895050 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 13:52:16.189668 containerd[1983]: time="2025-01-30T13:52:16.189358586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 13:52:16.892499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003258781.mount: Deactivated successfully.
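The kubelet crash loop here (restart counter 1, and again at counter 2 below) has one well-defined cause: /var/lib/kubelet/config.yaml does not exist yet. kubeadm writes that file during init/join, so these failures are expected until the node joins a cluster. For illustration only, the file kubeadm drops there is a KubeletConfiguration whose skeleton looks like this (placeholder values, not taken from this host):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches SystemdCgroup:true in the containerd config dump
    clusterDNS: ["10.96.0.10"]   # placeholder cluster DNS address
    clusterDomain: cluster.local
    EOF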
Jan 30 13:52:19.840341 containerd[1983]: time="2025-01-30T13:52:19.840287235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:19.847018 containerd[1983]: time="2025-01-30T13:52:19.845042220Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 30 13:52:19.847791 containerd[1983]: time="2025-01-30T13:52:19.847747121Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:19.858707 containerd[1983]: time="2025-01-30T13:52:19.858596718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:19.860468 containerd[1983]: time="2025-01-30T13:52:19.860043657Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.670640873s"
Jan 30 13:52:19.860468 containerd[1983]: time="2025-01-30T13:52:19.860154139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 30 13:52:19.887843 containerd[1983]: time="2025-01-30T13:52:19.887802966Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 13:52:22.404244 containerd[1983]: time="2025-01-30T13:52:22.404184275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:22.406437 containerd[1983]: time="2025-01-30T13:52:22.406226040Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 30 13:52:22.409093 containerd[1983]: time="2025-01-30T13:52:22.408604602Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:22.413275 containerd[1983]: time="2025-01-30T13:52:22.413225751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:22.415703 containerd[1983]: time="2025-01-30T13:52:22.415652580Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.527749405s"
Jan 30 13:52:22.415902 containerd[1983]: time="2025-01-30T13:52:22.415879131Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 30 13:52:22.451436 containerd[1983]: time="2025-01-30T13:52:22.450468981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 13:52:23.992009 containerd[1983]: time="2025-01-30T13:52:23.991951340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:23.993675 containerd[1983]: time="2025-01-30T13:52:23.993621266Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 30 13:52:23.995126 containerd[1983]: time="2025-01-30T13:52:23.994619639Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:23.997873 containerd[1983]: time="2025-01-30T13:52:23.997810560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:52:23.999166 containerd[1983]: time="2025-01-30T13:52:23.999125261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.548586087s"
Jan 30 13:52:23.999274 containerd[1983]: time="2025-01-30T13:52:23.999172794Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 30 13:52:24.027651 containerd[1983]: time="2025-01-30T13:52:24.027303638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 13:52:24.705962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 13:52:24.723629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:52:25.083607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:52:25.089265 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:52:25.221832 kubelet[2567]: E0130 13:52:25.221782 2567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:52:25.228747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:52:25.229155 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:52:25.555577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305382045.mount: Deactivated successfully.
Jan 30 13:52:26.273866 containerd[1983]: time="2025-01-30T13:52:26.273808719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.275762 containerd[1983]: time="2025-01-30T13:52:26.275539338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:52:26.278703 containerd[1983]: time="2025-01-30T13:52:26.277543429Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.281410 containerd[1983]: time="2025-01-30T13:52:26.280603028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.281410 containerd[1983]: time="2025-01-30T13:52:26.281248760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.253898612s" Jan 30 13:52:26.281410 containerd[1983]: time="2025-01-30T13:52:26.281286822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:52:26.310075 containerd[1983]: time="2025-01-30T13:52:26.310036933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:52:27.049530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801616062.mount: Deactivated successfully. 
Jan 30 13:52:28.433952 containerd[1983]: time="2025-01-30T13:52:28.433898648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.435758 containerd[1983]: time="2025-01-30T13:52:28.435700642Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:52:28.437170 containerd[1983]: time="2025-01-30T13:52:28.436620788Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.440221 containerd[1983]: time="2025-01-30T13:52:28.440172503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.441363 containerd[1983]: time="2025-01-30T13:52:28.441320039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.1310393s" Jan 30 13:52:28.441463 containerd[1983]: time="2025-01-30T13:52:28.441368960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:52:28.466757 containerd[1983]: time="2025-01-30T13:52:28.466712456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:52:28.479205 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:52:28.972084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353363395.mount: Deactivated successfully. 
Jan 30 13:52:28.981439 containerd[1983]: time="2025-01-30T13:52:28.981384477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.982554 containerd[1983]: time="2025-01-30T13:52:28.982473473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:52:28.985291 containerd[1983]: time="2025-01-30T13:52:28.983793783Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.988454 containerd[1983]: time="2025-01-30T13:52:28.987329413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:28.988454 containerd[1983]: time="2025-01-30T13:52:28.988313017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 521.559918ms" Jan 30 13:52:28.988454 containerd[1983]: time="2025-01-30T13:52:28.988349138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:52:29.066700 containerd[1983]: time="2025-01-30T13:52:29.066610096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:52:29.618826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225650096.mount: Deactivated successfully. Jan 30 13:52:32.937088 containerd[1983]: time="2025-01-30T13:52:32.937027412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.938718 containerd[1983]: time="2025-01-30T13:52:32.938489943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:52:32.940128 containerd[1983]: time="2025-01-30T13:52:32.939887729Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.943066 containerd[1983]: time="2025-01-30T13:52:32.943003914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:32.945182 containerd[1983]: time="2025-01-30T13:52:32.944627865Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.877919916s" Jan 30 13:52:32.945182 containerd[1983]: time="2025-01-30T13:52:32.944679546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:52:35.454267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
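The "bytes read" and elapsed-time figures in the pull messages above give the effective registry throughput for each image. A small worked example using the numbers exactly as logged (the sub-MiB/s figure for pause:3.9 is likely dominated by per-request latency rather than bandwidth):

```python
# ("bytes read", seconds) pairs copied from the pull log lines above.
pulls = {
    "kube-apiserver:v1.30.9": (32_677_012, 3.670640873),
    "kube-controller-manager:v1.30.9": (29_605_745, 2.527749405),
    "kube-scheduler:v1.30.9": (17_783_064, 1.548586087),
    "kube-proxy:v1.30.9": (29_058_337, 2.253898612),
    "coredns:v1.11.1": (18_185_761, 2.1310393),
    "pause:3.9": (322_290, 0.521559918),
    "etcd:3.5.12-0": (57_238_571, 3.877919916),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image:35s} {nbytes / secs / 2**20:5.1f} MiB/s")
# etcd, the largest image here (~57 MB), also pulls fastest (~14 MiB/s).
```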
Jan 30 13:52:35.467295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:35.782686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:35.792642 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:52:35.886127 kubelet[2755]: E0130 13:52:35.883840 2755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:52:35.887371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:52:35.887553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:52:36.253635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:36.260629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:36.300510 systemd[1]: Reloading requested from client PID 2769 ('systemctl') (unit session-9.scope)... Jan 30 13:52:36.300528 systemd[1]: Reloading... Jan 30 13:52:36.516245 zram_generator::config[2808]: No configuration found. Jan 30 13:52:36.665395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:36.758229 systemd[1]: Reloading finished in 456 ms. Jan 30 13:52:36.817399 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:52:36.817508 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:52:36.817798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:36.824562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:37.013221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:37.025865 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:52:37.090940 kubelet[2870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:37.090940 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:52:37.090940 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:52:37.092749 kubelet[2870]: I0130 13:52:37.092684 2870 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:52:37.449138 kubelet[2870]: I0130 13:52:37.448673 2870 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:52:37.449138 kubelet[2870]: I0130 13:52:37.448704 2870 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:52:37.449360 kubelet[2870]: I0130 13:52:37.449311 2870 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:52:37.486210 kubelet[2870]: I0130 13:52:37.486172 2870 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:52:37.489853 kubelet[2870]: E0130 13:52:37.489818 2870 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.511857 kubelet[2870]: I0130 13:52:37.511473 2870 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:52:37.515724 kubelet[2870]: I0130 13:52:37.515650 2870 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:52:37.516031 kubelet[2870]: I0130 13:52:37.515720 2870 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:52:37.516167 kubelet[2870]: I0130 13:52:37.516046 2870 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:52:37.516167 kubelet[2870]: I0130 13:52:37.516063 2870 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:52:37.518584 kubelet[2870]: I0130 13:52:37.518547 2870 state_mem.go:36] "Initialized new in-memory state 
store" Jan 30 13:52:37.519902 kubelet[2870]: I0130 13:52:37.519877 2870 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:52:37.520000 kubelet[2870]: I0130 13:52:37.519930 2870 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:52:37.520000 kubelet[2870]: I0130 13:52:37.519962 2870 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:52:37.520000 kubelet[2870]: I0130 13:52:37.519987 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:52:37.526784 kubelet[2870]: W0130 13:52:37.526723 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.526927 kubelet[2870]: E0130 13:52:37.526810 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.528132 kubelet[2870]: W0130 13:52:37.527204 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-166&limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.528132 kubelet[2870]: E0130 13:52:37.527265 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-166&limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.528132 kubelet[2870]: I0130 13:52:37.527462 2870 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:52:37.531155 kubelet[2870]: I0130 13:52:37.530161 2870 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:52:37.531155 kubelet[2870]: W0130 13:52:37.530267 2870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:52:37.531675 kubelet[2870]: I0130 13:52:37.531649 2870 server.go:1264] "Started kubelet" Jan 30 13:52:37.539648 kubelet[2870]: I0130 13:52:37.538988 2870 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:52:37.543122 kubelet[2870]: I0130 13:52:37.542801 2870 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:52:37.543692 kubelet[2870]: I0130 13:52:37.543631 2870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:52:37.544282 kubelet[2870]: I0130 13:52:37.544148 2870 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:52:37.547347 kubelet[2870]: E0130 13:52:37.547211 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.166:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.166:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-166.181f7cc2db3721ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-166,UID:ip-172-31-19-166,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-166,},FirstTimestamp:2025-01-30 13:52:37.531623919 +0000 UTC m=+0.500566428,LastTimestamp:2025-01-30 13:52:37.531623919 +0000 UTC m=+0.500566428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-166,}" Jan 30 13:52:37.550648 kubelet[2870]: I0130 13:52:37.550436 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:52:37.567382 kubelet[2870]: I0130 13:52:37.567224 2870 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:52:37.573804 kubelet[2870]: I0130 13:52:37.573760 2870 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:52:37.573939 kubelet[2870]: I0130 13:52:37.573855 2870 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:52:37.574870 kubelet[2870]: E0130 13:52:37.574378 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-166?timeout=10s\": dial tcp 172.31.19.166:6443: connect: connection refused" interval="200ms" Jan 30 13:52:37.575444 kubelet[2870]: W0130 13:52:37.575372 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.575519 kubelet[2870]: E0130 13:52:37.575461 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.577350 kubelet[2870]: E0130 13:52:37.577264 2870 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:52:37.577928 kubelet[2870]: I0130 13:52:37.577905 2870 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:52:37.577928 kubelet[2870]: I0130 13:52:37.577926 2870 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:52:37.578062 kubelet[2870]: I0130 13:52:37.578001 2870 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:52:37.593135 kubelet[2870]: I0130 13:52:37.592442 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:52:37.594378 kubelet[2870]: I0130 13:52:37.594339 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:52:37.594378 kubelet[2870]: I0130 13:52:37.594378 2870 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:52:37.594532 kubelet[2870]: I0130 13:52:37.594405 2870 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:52:37.594532 kubelet[2870]: E0130 13:52:37.594455 2870 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:52:37.602720 kubelet[2870]: W0130 13:52:37.602504 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.602720 kubelet[2870]: E0130 13:52:37.602569 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:37.616230 kubelet[2870]: I0130 13:52:37.615996 2870 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:52:37.616351 kubelet[2870]: I0130 13:52:37.616341 2870 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:52:37.616584 kubelet[2870]: I0130 13:52:37.616396 2870 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:37.618694 kubelet[2870]: I0130 13:52:37.618602 2870 policy_none.go:49] "None policy: Start" Jan 30 13:52:37.619429 kubelet[2870]: I0130 13:52:37.619315 2870 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:52:37.619522 kubelet[2870]: I0130 13:52:37.619457 2870 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:52:37.626168 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:52:37.642813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:52:37.648160 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:52:37.655342 kubelet[2870]: I0130 13:52:37.655310 2870 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:52:37.656087 kubelet[2870]: I0130 13:52:37.655578 2870 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:52:37.656087 kubelet[2870]: I0130 13:52:37.655708 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:52:37.659148 kubelet[2870]: E0130 13:52:37.658480 2870 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-166\" not found" Jan 30 13:52:37.669383 kubelet[2870]: I0130 13:52:37.669325 2870 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:37.669759 kubelet[2870]: E0130 13:52:37.669726 2870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.166:6443/api/v1/nodes\": dial tcp 172.31.19.166:6443: connect: connection refused" node="ip-172-31-19-166" Jan 30 13:52:37.695383 kubelet[2870]: I0130 13:52:37.695283 2870 topology_manager.go:215] "Topology Admit Handler" podUID="1f41ecd151849b30a22b2e6bcc5188ce" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-166" Jan 30 13:52:37.698297 kubelet[2870]: I0130 13:52:37.698258 2870 topology_manager.go:215] "Topology Admit Handler" podUID="9c8a9bbdae6fd5f8af98ce5c41bf7493" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:37.704477 kubelet[2870]: I0130 13:52:37.704253 2870 topology_manager.go:215] "Topology Admit Handler" podUID="28e000f8052af2ec3a98a44489e9be16" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-166" Jan 30 13:52:37.739938 systemd[1]: Created slice kubepods-burstable-pod1f41ecd151849b30a22b2e6bcc5188ce.slice - libcontainer container kubepods-burstable-pod1f41ecd151849b30a22b2e6bcc5188ce.slice. Jan 30 13:52:37.757452 systemd[1]: Created slice kubepods-burstable-pod9c8a9bbdae6fd5f8af98ce5c41bf7493.slice - libcontainer container kubepods-burstable-pod9c8a9bbdae6fd5f8af98ce5c41bf7493.slice. Jan 30 13:52:37.762861 systemd[1]: Created slice kubepods-burstable-pod28e000f8052af2ec3a98a44489e9be16.slice - libcontainer container kubepods-burstable-pod28e000f8052af2ec3a98a44489e9be16.slice. 
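The slice names systemd just created are derived mechanically from the podUID and QoS class shown in the "Topology Admit Handler" lines: with the systemd cgroup driver, the kubelet nests kubepods-<qos>-pod<uid>.slice under kubepods.slice. A sketch of that mapping (the driver's dash-to-underscore UID escaping is a no-op here because these static-pod UIDs are plain hashes):

```python
def pod_slice_name(pod_uid: str, qos: str = "burstable") -> str:
    """Reconstruct the slice unit names seen above from podUID + QoS class.

    The systemd cgroup driver escapes '-' in UIDs to '_'; the UIDs in this
    log contain no dashes, so the escape changes nothing.
    """
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

# Matches "kubepods-burstable-pod1f41ecd151849b30a22b2e6bcc5188ce.slice" above:
print(pod_slice_name("1f41ecd151849b30a22b2e6bcc5188ce"))
```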
Jan 30 13:52:37.775629 kubelet[2870]: E0130 13:52:37.775443 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-166?timeout=10s\": dial tcp 172.31.19.166:6443: connect: connection refused" interval="400ms" Jan 30 13:52:37.872409 kubelet[2870]: I0130 13:52:37.872378 2870 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:37.872757 kubelet[2870]: E0130 13:52:37.872723 2870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.166:6443/api/v1/nodes\": dial tcp 172.31.19.166:6443: connect: connection refused" node="ip-172-31-19-166" Jan 30 13:52:37.875150 kubelet[2870]: I0130 13:52:37.874920 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:37.875150 kubelet[2870]: I0130 13:52:37.874958 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28e000f8052af2ec3a98a44489e9be16-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-166\" (UID: \"28e000f8052af2ec3a98a44489e9be16\") " pod="kube-system/kube-scheduler-ip-172-31-19-166" Jan 30 13:52:37.875150 kubelet[2870]: I0130 13:52:37.874982 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: \"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:37.875150 kubelet[2870]: I0130 13:52:37.874996 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:37.875150 kubelet[2870]: I0130 13:52:37.875019 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:37.875420 kubelet[2870]: I0130 13:52:37.875047 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:37.875420 kubelet[2870]: I0130 13:52:37.875068 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-ca-certs\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: 
\"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:37.875420 kubelet[2870]: I0130 13:52:37.875085 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: \"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:37.875420 kubelet[2870]: I0130 13:52:37.875123 2870 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:38.056243 containerd[1983]: time="2025-01-30T13:52:38.056092631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-166,Uid:1f41ecd151849b30a22b2e6bcc5188ce,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:38.071570 containerd[1983]: time="2025-01-30T13:52:38.071154453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-166,Uid:28e000f8052af2ec3a98a44489e9be16,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:38.071570 containerd[1983]: time="2025-01-30T13:52:38.071154489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-166,Uid:9c8a9bbdae6fd5f8af98ce5c41bf7493,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:38.176619 kubelet[2870]: E0130 13:52:38.176554 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-166?timeout=10s\": dial tcp 172.31.19.166:6443: connect: connection refused" interval="800ms" Jan 30 13:52:38.277860 kubelet[2870]: I0130 13:52:38.277826 2870 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:38.278229 kubelet[2870]: E0130 13:52:38.278197 2870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.166:6443/api/v1/nodes\": dial tcp 172.31.19.166:6443: connect: connection refused" node="ip-172-31-19-166" Jan 30 13:52:38.595739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565350547.mount: Deactivated successfully. 
Jan 30 13:52:38.606865 containerd[1983]: time="2025-01-30T13:52:38.606698487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:38.608115 containerd[1983]: time="2025-01-30T13:52:38.608063888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:38.608985 containerd[1983]: time="2025-01-30T13:52:38.608940263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:52:38.611658 containerd[1983]: time="2025-01-30T13:52:38.611398501Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:38.617260 containerd[1983]: time="2025-01-30T13:52:38.617197308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:52:38.618032 containerd[1983]: time="2025-01-30T13:52:38.617974519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:38.619762 containerd[1983]: time="2025-01-30T13:52:38.619712015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:52:38.632362 containerd[1983]: time="2025-01-30T13:52:38.631701568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:52:38.644140 containerd[1983]: time="2025-01-30T13:52:38.643730947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.52345ms" Jan 30 13:52:38.651361 kubelet[2870]: W0130 13:52:38.648916 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.651361 kubelet[2870]: E0130 13:52:38.650365 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.663142 containerd[1983]: time="2025-01-30T13:52:38.663067489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.809749ms" Jan 30 13:52:38.665773 containerd[1983]: time="2025-01-30T13:52:38.665711000Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.398856ms" Jan 30 13:52:38.777217 kubelet[2870]: W0130 13:52:38.777017 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.777217 kubelet[2870]: E0130 13:52:38.777121 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.811771 kubelet[2870]: W0130 13:52:38.811550 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-166&limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.811771 kubelet[2870]: E0130 13:52:38.811633 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-166&limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:38.979425 kubelet[2870]: E0130 13:52:38.979370 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-166?timeout=10s\": dial tcp 172.31.19.166:6443: connect: connection refused" interval="1.6s" Jan 30 13:52:38.988206 containerd[1983]: time="2025-01-30T13:52:38.986701830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:38.988206 containerd[1983]: time="2025-01-30T13:52:38.986775032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:38.988206 containerd[1983]: time="2025-01-30T13:52:38.986802864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:38.988206 containerd[1983]: time="2025-01-30T13:52:38.986935071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:38.993959 containerd[1983]: time="2025-01-30T13:52:38.993584407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:38.993959 containerd[1983]: time="2025-01-30T13:52:38.993645923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:38.993959 containerd[1983]: time="2025-01-30T13:52:38.993662786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:38.993959 containerd[1983]: time="2025-01-30T13:52:38.993838954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:38.999217 containerd[1983]: time="2025-01-30T13:52:38.998383319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:38.999217 containerd[1983]: time="2025-01-30T13:52:38.998480209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:38.999217 containerd[1983]: time="2025-01-30T13:52:38.998536901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:38.999217 containerd[1983]: time="2025-01-30T13:52:38.998935365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:39.045459 systemd[1]: Started cri-containerd-d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd.scope - libcontainer container d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd. Jan 30 13:52:39.058414 systemd[1]: Started cri-containerd-f054cd1b402db92e01819a5b900dac5facfba2baf64043e837f8d3d6087c17ef.scope - libcontainer container f054cd1b402db92e01819a5b900dac5facfba2baf64043e837f8d3d6087c17ef. Jan 30 13:52:39.061910 systemd[1]: Started cri-containerd-f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3.scope - libcontainer container f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3. Jan 30 13:52:39.086635 kubelet[2870]: I0130 13:52:39.086602 2870 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:39.087120 kubelet[2870]: E0130 13:52:39.087064 2870 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.166:6443/api/v1/nodes\": dial tcp 172.31.19.166:6443: connect: connection refused" node="ip-172-31-19-166" Jan 30 13:52:39.107056 kubelet[2870]: W0130 13:52:39.106986 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:39.107056 kubelet[2870]: E0130 13:52:39.107067 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:39.204619 containerd[1983]: time="2025-01-30T13:52:39.204338514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-166,Uid:1f41ecd151849b30a22b2e6bcc5188ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"f054cd1b402db92e01819a5b900dac5facfba2baf64043e837f8d3d6087c17ef\"" Jan 30 13:52:39.209137 containerd[1983]: time="2025-01-30T13:52:39.208824925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-166,Uid:28e000f8052af2ec3a98a44489e9be16,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd\"" Jan 30 13:52:39.227851 containerd[1983]: time="2025-01-30T13:52:39.227690182Z" level=info msg="CreateContainer 
within sandbox \"d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:52:39.229408 containerd[1983]: time="2025-01-30T13:52:39.229235886Z" level=info msg="CreateContainer within sandbox \"f054cd1b402db92e01819a5b900dac5facfba2baf64043e837f8d3d6087c17ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:52:39.229996 containerd[1983]: time="2025-01-30T13:52:39.229752684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-166,Uid:9c8a9bbdae6fd5f8af98ce5c41bf7493,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3\"" Jan 30 13:52:39.268906 containerd[1983]: time="2025-01-30T13:52:39.268439055Z" level=info msg="CreateContainer within sandbox \"f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:52:39.280935 containerd[1983]: time="2025-01-30T13:52:39.280886744Z" level=info msg="CreateContainer within sandbox \"f054cd1b402db92e01819a5b900dac5facfba2baf64043e837f8d3d6087c17ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1df44ad1b0fcd0999921fcf977af1a103711829d41de5da615a0751e5db9dd2\"" Jan 30 13:52:39.283121 containerd[1983]: time="2025-01-30T13:52:39.282213023Z" level=info msg="StartContainer for \"f1df44ad1b0fcd0999921fcf977af1a103711829d41de5da615a0751e5db9dd2\"" Jan 30 13:52:39.290573 containerd[1983]: time="2025-01-30T13:52:39.290523002Z" level=info msg="CreateContainer within sandbox \"d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc\"" Jan 30 13:52:39.296492 containerd[1983]: time="2025-01-30T13:52:39.296453588Z" level=info msg="StartContainer for \"aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc\"" Jan 30 13:52:39.355069 containerd[1983]: time="2025-01-30T13:52:39.354999758Z" level=info msg="CreateContainer within sandbox \"f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c\"" Jan 30 13:52:39.355983 containerd[1983]: time="2025-01-30T13:52:39.355718722Z" level=info msg="StartContainer for \"bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c\"" Jan 30 13:52:39.387391 systemd[1]: Started cri-containerd-f1df44ad1b0fcd0999921fcf977af1a103711829d41de5da615a0751e5db9dd2.scope - libcontainer container f1df44ad1b0fcd0999921fcf977af1a103711829d41de5da615a0751e5db9dd2. Jan 30 13:52:39.432982 systemd[1]: Started cri-containerd-aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc.scope - libcontainer container aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc. Jan 30 13:52:39.466460 systemd[1]: Started cri-containerd-bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c.scope - libcontainer container bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c. 
Jan 30 13:52:39.528380 kubelet[2870]: E0130 13:52:39.528259 2870 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:39.577853 containerd[1983]: time="2025-01-30T13:52:39.577712554Z" level=info msg="StartContainer for \"f1df44ad1b0fcd0999921fcf977af1a103711829d41de5da615a0751e5db9dd2\" returns successfully" Jan 30 13:52:39.615725 containerd[1983]: time="2025-01-30T13:52:39.615514195Z" level=info msg="StartContainer for \"bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c\" returns successfully" Jan 30 13:52:39.615725 containerd[1983]: time="2025-01-30T13:52:39.615604071Z" level=info msg="StartContainer for \"aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc\" returns successfully" Jan 30 13:52:40.358644 kubelet[2870]: W0130 13:52:40.358383 2870 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:40.358644 kubelet[2870]: E0130 13:52:40.358462 2870 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.166:6443: connect: connection refused Jan 30 13:52:40.690561 kubelet[2870]: I0130 13:52:40.689635 2870 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:43.231391 update_engine[1949]: I20250130 13:52:43.230151 1949 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:52:43.276705 kubelet[2870]: E0130 13:52:43.276330 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-166\" not found" node="ip-172-31-19-166" Jan 30 13:52:43.301198 kubelet[2870]: I0130 13:52:43.299939 2870 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-166" Jan 30 13:52:43.302905 kubelet[2870]: E0130 13:52:43.302502 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-166.181f7cc2db3721ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-166,UID:ip-172-31-19-166,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-166,},FirstTimestamp:2025-01-30 13:52:37.531623919 +0000 UTC m=+0.500566428,LastTimestamp:2025-01-30 13:52:37.531623919 +0000 UTC m=+0.500566428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-166,}" Jan 30 13:52:43.383128 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3159) Jan 30 13:52:43.408177 kubelet[2870]: E0130 13:52:43.407610 2870 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-19-166.181f7cc2ddef59c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-166,UID:ip-172-31-19-166,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-19-166,},FirstTimestamp:2025-01-30 13:52:37.577251269 +0000 UTC m=+0.546193780,LastTimestamp:2025-01-30 13:52:37.577251269 +0000 UTC m=+0.546193780,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-166,}" Jan 30 13:52:43.526613 kubelet[2870]: I0130 13:52:43.525782 2870 apiserver.go:52] "Watching apiserver" Jan 30 13:52:43.574048 kubelet[2870]: I0130 13:52:43.574007 2870 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:52:43.763138 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3164) Jan 30 13:52:44.079149 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3164) Jan 30 13:52:45.496989 systemd[1]: Reloading requested from client PID 3413 ('systemctl') (unit session-9.scope)... Jan 30 13:52:45.497008 systemd[1]: Reloading... Jan 30 13:52:45.671133 zram_generator::config[3453]: No configuration found. Jan 30 13:52:45.827046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:52:46.032384 systemd[1]: Reloading finished in 534 ms. Jan 30 13:52:46.099074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:46.114461 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:52:46.115317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
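By this point the journal has seen four distinct kubelet processes: the two crash-looping pre-init attempts (2567, 2755), the bootstrap instance (2870, "Client rotation is on, will bootstrap in background"), and the post-configuration restart below (3510, which loads kubelet-client-current.pem). A small journal-grepping sketch that recovers those PIDs from lines shaped like the ones in this log:

```python
import re

# Matches the "(kubelet)[2567]:" markers systemd prints when the unit starts.
KUBELET_START = re.compile(r"\(kubelet\)\[(\d+)\]")

def kubelet_start_pids(journal_lines):
    """Return each kubelet PID in start order: [2567, 2755, 2870, 3510] here."""
    return [int(m.group(1)) for line in journal_lines
            for m in KUBELET_START.finditer(line)]
```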
Jan 30 13:52:46.124569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:52:46.416285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:52:46.444757 (kubelet)[3510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:52:46.571711 kubelet[3510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:46.571711 kubelet[3510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:52:46.571711 kubelet[3510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:52:46.573835 kubelet[3510]: I0130 13:52:46.572521 3510 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:52:46.592546 kubelet[3510]: I0130 13:52:46.592495 3510 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:52:46.593134 kubelet[3510]: I0130 13:52:46.593113 3510 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:52:46.593451 kubelet[3510]: I0130 13:52:46.593439 3510 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:52:46.595734 kubelet[3510]: I0130 13:52:46.595713 3510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:52:46.597668 kubelet[3510]: I0130 13:52:46.597637 3510 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:52:46.626363 kubelet[3510]: I0130 13:52:46.626307 3510 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:52:46.627268 kubelet[3510]: I0130 13:52:46.627184 3510 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:52:46.627752 kubelet[3510]: I0130 13:52:46.627517 3510 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-166","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:52:46.627926 kubelet[3510]: I0130 13:52:46.627915 3510 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:52:46.628138 kubelet[3510]: I0130 13:52:46.627981 3510 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:52:46.628138 kubelet[3510]: I0130 13:52:46.628055 3510 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:46.628251 kubelet[3510]: I0130 13:52:46.628201 3510 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:52:46.628251 kubelet[3510]: I0130 13:52:46.628219 3510 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:52:46.634589 kubelet[3510]: I0130 13:52:46.634392 3510 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:52:46.634589 kubelet[3510]: I0130 13:52:46.634432 3510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:52:46.645367 kubelet[3510]: I0130 13:52:46.643776 3510 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:52:46.647609 kubelet[3510]: I0130 13:52:46.647271 3510 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:52:46.648076 kubelet[3510]: I0130 13:52:46.648061 3510 server.go:1264] "Started kubelet" Jan 30 13:52:46.658651 kubelet[3510]: I0130 13:52:46.658619 3510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:52:46.676532 kubelet[3510]: I0130 13:52:46.676401 3510 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:52:46.677731 kubelet[3510]: I0130 13:52:46.677237 3510 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Jan 30 13:52:46.678536 kubelet[3510]: I0130 13:52:46.678514 3510 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:52:46.689370 kubelet[3510]: I0130 13:52:46.679711 3510 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:52:46.694057 kubelet[3510]: I0130 13:52:46.694026 3510 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:52:46.696501 kubelet[3510]: I0130 13:52:46.695758 3510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:52:46.707891 kubelet[3510]: I0130 13:52:46.681113 3510 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:52:46.711965 kubelet[3510]: I0130 13:52:46.679766 3510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:52:46.711965 kubelet[3510]: I0130 13:52:46.711893 3510 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:52:46.714493 kubelet[3510]: I0130 13:52:46.714448 3510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:52:46.719785 kubelet[3510]: I0130 13:52:46.718574 3510 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:52:46.729218 kubelet[3510]: I0130 13:52:46.729183 3510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:52:46.729951 kubelet[3510]: I0130 13:52:46.729400 3510 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:52:46.729951 kubelet[3510]: I0130 13:52:46.729429 3510 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:52:46.729951 kubelet[3510]: E0130 13:52:46.729478 3510 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:52:46.737494 kubelet[3510]: E0130 13:52:46.737458 3510 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:52:46.787587 kubelet[3510]: I0130 13:52:46.787560 3510 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-166" Jan 30 13:52:46.803697 kubelet[3510]: I0130 13:52:46.802654 3510 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-166" Jan 30 13:52:46.803697 kubelet[3510]: I0130 13:52:46.802750 3510 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-166" Jan 30 13:52:46.835125 kubelet[3510]: E0130 13:52:46.831696 3510 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:52:46.862275 kubelet[3510]: I0130 13:52:46.862191 3510 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:52:46.862275 kubelet[3510]: I0130 13:52:46.862215 3510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:52:46.862498 kubelet[3510]: I0130 13:52:46.862297 3510 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:52:46.862498 kubelet[3510]: I0130 13:52:46.862489 3510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:52:46.862576 kubelet[3510]: I0130 13:52:46.862503 3510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:52:46.862576 kubelet[3510]: I0130 13:52:46.862532 3510 policy_none.go:49] "None policy: Start" Jan 30 13:52:46.863988 kubelet[3510]: I0130 13:52:46.863969 3510 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:52:46.864149 kubelet[3510]: I0130 13:52:46.863999 3510 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:52:46.865342 kubelet[3510]: I0130 13:52:46.865310 3510 state_mem.go:75] "Updated machine memory state" Jan 30 13:52:46.895948 kubelet[3510]: I0130 13:52:46.895556 3510 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:52:46.895948 kubelet[3510]: I0130 13:52:46.895766 3510 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:52:46.896172 kubelet[3510]: I0130 13:52:46.895997 3510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:52:47.035203 kubelet[3510]: I0130 13:52:47.032682 3510 topology_manager.go:215] "Topology Admit Handler" podUID="1f41ecd151849b30a22b2e6bcc5188ce" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-166" Jan 30 13:52:47.035203 kubelet[3510]: I0130 13:52:47.032916 3510 topology_manager.go:215] "Topology Admit Handler" podUID="9c8a9bbdae6fd5f8af98ce5c41bf7493" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.035203 kubelet[3510]: I0130 13:52:47.033022 3510 topology_manager.go:215] "Topology Admit Handler" podUID="28e000f8052af2ec3a98a44489e9be16" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-166" Jan 30 13:52:47.056260 kubelet[3510]: E0130 13:52:47.056222 3510 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-166\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-166" Jan 30 13:52:47.058815 kubelet[3510]: E0130 13:52:47.057156 3510 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-166\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:47.112284 kubelet[3510]: I0130 13:52:47.111549 3510 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.112284 kubelet[3510]: I0130 13:52:47.111679 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.112284 kubelet[3510]: I0130 13:52:47.112062 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: \"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:47.112284 kubelet[3510]: I0130 13:52:47.112091 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: \"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:47.113410 kubelet[3510]: I0130 13:52:47.112148 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.113522 kubelet[3510]: I0130 13:52:47.113434 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28e000f8052af2ec3a98a44489e9be16-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-166\" (UID: \"28e000f8052af2ec3a98a44489e9be16\") " pod="kube-system/kube-scheduler-ip-172-31-19-166" Jan 30 13:52:47.113522 kubelet[3510]: I0130 13:52:47.113460 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f41ecd151849b30a22b2e6bcc5188ce-ca-certs\") pod \"kube-apiserver-ip-172-31-19-166\" (UID: \"1f41ecd151849b30a22b2e6bcc5188ce\") " pod="kube-system/kube-apiserver-ip-172-31-19-166" Jan 30 13:52:47.113522 kubelet[3510]: I0130 13:52:47.113482 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: \"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.113522 kubelet[3510]: I0130 13:52:47.113508 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c8a9bbdae6fd5f8af98ce5c41bf7493-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-166\" (UID: 
\"9c8a9bbdae6fd5f8af98ce5c41bf7493\") " pod="kube-system/kube-controller-manager-ip-172-31-19-166" Jan 30 13:52:47.637549 kubelet[3510]: I0130 13:52:47.637494 3510 apiserver.go:52] "Watching apiserver" Jan 30 13:52:47.696365 kubelet[3510]: I0130 13:52:47.696294 3510 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:52:47.753905 kubelet[3510]: I0130 13:52:47.753669 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-166" podStartSLOduration=2.753646097 podStartE2EDuration="2.753646097s" podCreationTimestamp="2025-01-30 13:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:47.739183849 +0000 UTC m=+1.275949188" watchObservedRunningTime="2025-01-30 13:52:47.753646097 +0000 UTC m=+1.290411433" Jan 30 13:52:47.769308 kubelet[3510]: I0130 13:52:47.769237 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-166" podStartSLOduration=2.769214019 podStartE2EDuration="2.769214019s" podCreationTimestamp="2025-01-30 13:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:47.755441048 +0000 UTC m=+1.292206386" watchObservedRunningTime="2025-01-30 13:52:47.769214019 +0000 UTC m=+1.305979352" Jan 30 13:52:47.784771 kubelet[3510]: I0130 13:52:47.784419 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-166" podStartSLOduration=0.784400877 podStartE2EDuration="784.400877ms" podCreationTimestamp="2025-01-30 13:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:47.769694496 +0000 UTC m=+1.306459834" watchObservedRunningTime="2025-01-30 13:52:47.784400877 +0000 UTC m=+1.321166214" Jan 30 13:52:53.198054 sudo[2303]: pam_unix(sudo:session): session closed for user root Jan 30 13:52:53.222862 sshd[2300]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:53.228763 systemd[1]: sshd@8-172.31.19.166:22-139.178.68.195:37190.service: Deactivated successfully. Jan 30 13:52:53.232816 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:52:53.233025 systemd[1]: session-9.scope: Consumed 5.060s CPU time, 187.8M memory peak, 0B memory swap peak. Jan 30 13:52:53.234075 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:52:53.237612 systemd-logind[1947]: Removed session 9. Jan 30 13:52:59.423638 kubelet[3510]: I0130 13:52:59.423597 3510 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:52:59.442300 containerd[1983]: time="2025-01-30T13:52:59.442242976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:52:59.443238 kubelet[3510]: I0130 13:52:59.442615 3510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:53:00.250440 kubelet[3510]: I0130 13:53:00.250389 3510 topology_manager.go:215] "Topology Admit Handler" podUID="a1d25065-f757-4742-adfa-334c7a57c053" podNamespace="kube-system" podName="kube-proxy-8pcx6" Jan 30 13:53:00.291899 systemd[1]: Created slice kubepods-besteffort-poda1d25065_f757_4742_adfa_334c7a57c053.slice - libcontainer container kubepods-besteffort-poda1d25065_f757_4742_adfa_334c7a57c053.slice. Jan 30 13:53:00.323833 kubelet[3510]: I0130 13:53:00.323783 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1d25065-f757-4742-adfa-334c7a57c053-kube-proxy\") pod \"kube-proxy-8pcx6\" (UID: \"a1d25065-f757-4742-adfa-334c7a57c053\") " pod="kube-system/kube-proxy-8pcx6" Jan 30 13:53:00.323833 kubelet[3510]: I0130 13:53:00.323831 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d25065-f757-4742-adfa-334c7a57c053-xtables-lock\") pod \"kube-proxy-8pcx6\" (UID: \"a1d25065-f757-4742-adfa-334c7a57c053\") " pod="kube-system/kube-proxy-8pcx6" Jan 30 13:53:00.324037 kubelet[3510]: I0130 13:53:00.323865 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d25065-f757-4742-adfa-334c7a57c053-lib-modules\") pod \"kube-proxy-8pcx6\" (UID: \"a1d25065-f757-4742-adfa-334c7a57c053\") " pod="kube-system/kube-proxy-8pcx6" Jan 30 13:53:00.324037 kubelet[3510]: I0130 13:53:00.323890 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7khp9\" (UniqueName: \"kubernetes.io/projected/a1d25065-f757-4742-adfa-334c7a57c053-kube-api-access-7khp9\") pod \"kube-proxy-8pcx6\" (UID: \"a1d25065-f757-4742-adfa-334c7a57c053\") " pod="kube-system/kube-proxy-8pcx6" Jan 30 13:53:00.551629 kubelet[3510]: I0130 13:53:00.548572 3510 topology_manager.go:215] "Topology Admit Handler" podUID="00e620dd-f0b4-46a9-aed1-3d10b70a35ce" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5vphk" Jan 30 13:53:00.565672 systemd[1]: Created slice kubepods-besteffort-pod00e620dd_f0b4_46a9_aed1_3d10b70a35ce.slice - libcontainer container kubepods-besteffort-pod00e620dd_f0b4_46a9_aed1_3d10b70a35ce.slice. 
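[Editor's note] The "Created slice kubepods-besteffort-pod..." entries show how the systemd cgroup driver recorded in the NodeConfig earlier ("CgroupDriver":"systemd") derives a slice unit name from the pod's QoS class and UID: dashes in the UID become underscores. A small, fully runnable sketch that reproduces the exact unit name seen in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the systemd slice naming visible in the log:
// kubepods-<qos>-pod<uid>.slice, with dashes in the UID replaced by underscores.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of kube-proxy-8pcx6, copied from the log above.
	fmt.Println(sliceName("besteffort", "a1d25065-f757-4742-adfa-334c7a57c053"))
	// Output: kubepods-besteffort-poda1d25065_f757_4742_adfa_334c7a57c053.slice
}
```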
Jan 30 13:53:00.604537 containerd[1983]: time="2025-01-30T13:53:00.604080196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pcx6,Uid:a1d25065-f757-4742-adfa-334c7a57c053,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:00.626479 kubelet[3510]: I0130 13:53:00.626322 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00e620dd-f0b4-46a9-aed1-3d10b70a35ce-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5vphk\" (UID: \"00e620dd-f0b4-46a9-aed1-3d10b70a35ce\") " pod="tigera-operator/tigera-operator-7bc55997bb-5vphk" Jan 30 13:53:00.626479 kubelet[3510]: I0130 13:53:00.626382 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69gn\" (UniqueName: \"kubernetes.io/projected/00e620dd-f0b4-46a9-aed1-3d10b70a35ce-kube-api-access-m69gn\") pod \"tigera-operator-7bc55997bb-5vphk\" (UID: \"00e620dd-f0b4-46a9-aed1-3d10b70a35ce\") " pod="tigera-operator/tigera-operator-7bc55997bb-5vphk" Jan 30 13:53:00.661813 containerd[1983]: time="2025-01-30T13:53:00.661270337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:00.661813 containerd[1983]: time="2025-01-30T13:53:00.661361219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:00.661813 containerd[1983]: time="2025-01-30T13:53:00.661383377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:00.661813 containerd[1983]: time="2025-01-30T13:53:00.661671805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:00.693326 systemd[1]: Started cri-containerd-ecbbb52e138e796cacff3ba7b3f8c898a453c57bf4eb4940f70001160e884334.scope - libcontainer container ecbbb52e138e796cacff3ba7b3f8c898a453c57bf4eb4940f70001160e884334. Jan 30 13:53:00.723437 containerd[1983]: time="2025-01-30T13:53:00.723376803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pcx6,Uid:a1d25065-f757-4742-adfa-334c7a57c053,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecbbb52e138e796cacff3ba7b3f8c898a453c57bf4eb4940f70001160e884334\"" Jan 30 13:53:00.736794 containerd[1983]: time="2025-01-30T13:53:00.736525909Z" level=info msg="CreateContainer within sandbox \"ecbbb52e138e796cacff3ba7b3f8c898a453c57bf4eb4940f70001160e884334\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:53:00.782455 containerd[1983]: time="2025-01-30T13:53:00.782315291Z" level=info msg="CreateContainer within sandbox \"ecbbb52e138e796cacff3ba7b3f8c898a453c57bf4eb4940f70001160e884334\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fa7c81ac3d2014c9ec6d1606f22d216b1c8484ef58adb00f76155dff988e13f\"" Jan 30 13:53:00.790193 containerd[1983]: time="2025-01-30T13:53:00.790126115Z" level=info msg="StartContainer for \"7fa7c81ac3d2014c9ec6d1606f22d216b1c8484ef58adb00f76155dff988e13f\"" Jan 30 13:53:00.827321 systemd[1]: Started cri-containerd-7fa7c81ac3d2014c9ec6d1606f22d216b1c8484ef58adb00f76155dff988e13f.scope - libcontainer container 7fa7c81ac3d2014c9ec6d1606f22d216b1c8484ef58adb00f76155dff988e13f. 
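[Editor's note] The kube-proxy entries above walk the standard CRI container lifecycle: RunPodSandbox returns a sandbox id ("ecbbb52e..."), CreateContainer is issued inside that sandbox, and StartContainer runs the returned container id ("7fa7c81a..."). A compact sketch of the same sequence against an already-connected RuntimeServiceClient (client setup as in the earlier snippet; sandbox and container configs elided to parameters):

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runPod issues the three CRI calls visible in the log for kube-proxy-8pcx6:
// RunPodSandbox -> CreateContainer (inside the returned sandbox) -> StartContainer.
func runPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) (string, error) {

	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return "", err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId, // "ecbbb52e..." in the log above
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return ctr.ContainerId, err // "7fa7c81a..." in the log above
}
```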
Jan 30 13:53:00.870758 containerd[1983]: time="2025-01-30T13:53:00.870684735Z" level=info msg="StartContainer for \"7fa7c81ac3d2014c9ec6d1606f22d216b1c8484ef58adb00f76155dff988e13f\" returns successfully" Jan 30 13:53:00.873363 containerd[1983]: time="2025-01-30T13:53:00.873316318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5vphk,Uid:00e620dd-f0b4-46a9-aed1-3d10b70a35ce,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:53:00.913145 containerd[1983]: time="2025-01-30T13:53:00.912715885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:00.913145 containerd[1983]: time="2025-01-30T13:53:00.912871761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:00.913145 containerd[1983]: time="2025-01-30T13:53:00.912889601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:00.913866 containerd[1983]: time="2025-01-30T13:53:00.913495332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:00.939933 systemd[1]: Started cri-containerd-4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e.scope - libcontainer container 4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e. Jan 30 13:53:01.009484 containerd[1983]: time="2025-01-30T13:53:01.009438560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5vphk,Uid:00e620dd-f0b4-46a9-aed1-3d10b70a35ce,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e\"" Jan 30 13:53:01.011929 containerd[1983]: time="2025-01-30T13:53:01.011871626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:53:03.412689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073428822.mount: Deactivated successfully. 
Jan 30 13:53:04.618693 containerd[1983]: time="2025-01-30T13:53:04.618640686Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:04.621414 containerd[1983]: time="2025-01-30T13:53:04.621344214Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:53:04.623838 containerd[1983]: time="2025-01-30T13:53:04.623766808Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:04.628347 containerd[1983]: time="2025-01-30T13:53:04.628290737Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:04.629342 containerd[1983]: time="2025-01-30T13:53:04.629174015Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.617255067s" Jan 30 13:53:04.629342 containerd[1983]: time="2025-01-30T13:53:04.629219346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:53:04.633437 containerd[1983]: time="2025-01-30T13:53:04.633383891Z" level=info msg="CreateContainer within sandbox \"4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:53:04.693521 containerd[1983]: time="2025-01-30T13:53:04.693332815Z" level=info msg="CreateContainer within sandbox \"4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940\"" Jan 30 13:53:04.698921 containerd[1983]: time="2025-01-30T13:53:04.698881564Z" level=info msg="StartContainer for \"a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940\"" Jan 30 13:53:04.771376 systemd[1]: Started cri-containerd-a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940.scope - libcontainer container a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940. 
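[Editor's note] The reported pull duration is internally consistent: the PullImage entry above is stamped 2025-01-30T13:53:01.011871626Z, and adding containerd's reported 3.617255067s lands within ~50µs of the "Pulled image" timestamp. A quick check of that arithmetic, with both values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps/durations copied from the containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:53:01.011871626Z")
	reported := 3617255067 * time.Nanosecond // "in 3.617255067s"

	fmt.Println(start.Add(reported).Format(time.RFC3339Nano))
	// 2025-01-30T13:53:04.629126693Z, agreeing with the "Pulled image"
	// entry logged at 13:53:04.629174015Z.
}
```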
Jan 30 13:53:04.819383 containerd[1983]: time="2025-01-30T13:53:04.815319579Z" level=info msg="StartContainer for \"a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940\" returns successfully" Jan 30 13:53:05.239296 kubelet[3510]: I0130 13:53:05.237013 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pcx6" podStartSLOduration=5.236990566 podStartE2EDuration="5.236990566s" podCreationTimestamp="2025-01-30 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:02.500414155 +0000 UTC m=+16.037179499" watchObservedRunningTime="2025-01-30 13:53:05.236990566 +0000 UTC m=+18.773755902" Jan 30 13:53:05.239992 kubelet[3510]: I0130 13:53:05.239439 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5vphk" podStartSLOduration=1.619804996 podStartE2EDuration="5.239417161s" podCreationTimestamp="2025-01-30 13:53:00 +0000 UTC" firstStartedPulling="2025-01-30 13:53:01.011161289 +0000 UTC m=+14.547926607" lastFinishedPulling="2025-01-30 13:53:04.630773445 +0000 UTC m=+18.167538772" observedRunningTime="2025-01-30 13:53:05.233701896 +0000 UTC m=+18.770467232" watchObservedRunningTime="2025-01-30 13:53:05.239417161 +0000 UTC m=+18.776182502" Jan 30 13:53:08.432349 kubelet[3510]: I0130 13:53:08.432290 3510 topology_manager.go:215] "Topology Admit Handler" podUID="5e4c27cd-a0ce-4246-90a8-b057742c14af" podNamespace="calico-system" podName="calico-typha-9f9f68dfb-6kdt9" Jan 30 13:53:08.446416 systemd[1]: Created slice kubepods-besteffort-pod5e4c27cd_a0ce_4246_90a8_b057742c14af.slice - libcontainer container kubepods-besteffort-pod5e4c27cd_a0ce_4246_90a8_b057742c14af.slice. Jan 30 13:53:08.489618 kubelet[3510]: I0130 13:53:08.489533 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e4c27cd-a0ce-4246-90a8-b057742c14af-tigera-ca-bundle\") pod \"calico-typha-9f9f68dfb-6kdt9\" (UID: \"5e4c27cd-a0ce-4246-90a8-b057742c14af\") " pod="calico-system/calico-typha-9f9f68dfb-6kdt9" Jan 30 13:53:08.489780 kubelet[3510]: I0130 13:53:08.489634 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5e4c27cd-a0ce-4246-90a8-b057742c14af-typha-certs\") pod \"calico-typha-9f9f68dfb-6kdt9\" (UID: \"5e4c27cd-a0ce-4246-90a8-b057742c14af\") " pod="calico-system/calico-typha-9f9f68dfb-6kdt9" Jan 30 13:53:08.489780 kubelet[3510]: I0130 13:53:08.489669 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsk6m\" (UniqueName: \"kubernetes.io/projected/5e4c27cd-a0ce-4246-90a8-b057742c14af-kube-api-access-dsk6m\") pod \"calico-typha-9f9f68dfb-6kdt9\" (UID: \"5e4c27cd-a0ce-4246-90a8-b057742c14af\") " pod="calico-system/calico-typha-9f9f68dfb-6kdt9" Jan 30 13:53:08.638821 kubelet[3510]: I0130 13:53:08.634191 3510 topology_manager.go:215] "Topology Admit Handler" podUID="bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb" podNamespace="calico-system" podName="calico-node-79lvl" Jan 30 13:53:08.660794 systemd[1]: Created slice kubepods-besteffort-podbbb7a247_88a9_407e_bdc1_d8a1b8ba3fbb.slice - libcontainer container kubepods-besteffort-podbbb7a247_88a9_407e_bdc1_d8a1b8ba3fbb.slice. 
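[Editor's note] The pod_startup_latency_tracker entries above relate their fields as SLO duration = end-to-end duration minus image-pull time. For tigera-operator-7bc55997bb-5vphk the numbers check out to within nanoseconds (the small residue is consistent with mixing wall-clock and monotonic readings); a sketch of the arithmetic with values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the tigera-operator pod_startup_latency_tracker entry above.
	e2e := 5239417161 * time.Nanosecond // podStartE2EDuration="5.239417161s"
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:53:01.011161289Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:53:04.630773445Z")

	// podStartSLOduration excludes image pulling from the end-to-end figure.
	fmt.Println(e2e - lastPull.Sub(firstPull))
	// 1.619805005s, vs the logged podStartSLOduration=1.619804996
}
```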
Jan 30 13:53:08.694476 kubelet[3510]: I0130 13:53:08.691168 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-var-run-calico\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.694476 kubelet[3510]: I0130 13:53:08.691215 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-var-lib-calico\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.694476 kubelet[3510]: I0130 13:53:08.691248 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-node-certs\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.694476 kubelet[3510]: I0130 13:53:08.691274 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq92\" (UniqueName: \"kubernetes.io/projected/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-kube-api-access-5fq92\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.694476 kubelet[3510]: I0130 13:53:08.691302 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-tigera-ca-bundle\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.695002 kubelet[3510]: I0130 13:53:08.691336 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-xtables-lock\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.695002 kubelet[3510]: I0130 13:53:08.691362 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-policysync\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.695002 kubelet[3510]: I0130 13:53:08.691386 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-flexvol-driver-host\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.695002 kubelet[3510]: I0130 13:53:08.691412 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-cni-bin-dir\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.695002 kubelet[3510]: I0130 13:53:08.691436 3510 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-cni-log-dir\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.697211 kubelet[3510]: I0130 13:53:08.691460 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-cni-net-dir\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.697211 kubelet[3510]: I0130 13:53:08.691493 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb-lib-modules\") pod \"calico-node-79lvl\" (UID: \"bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb\") " pod="calico-system/calico-node-79lvl" Jan 30 13:53:08.750642 containerd[1983]: time="2025-01-30T13:53:08.750011270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9f9f68dfb-6kdt9,Uid:5e4c27cd-a0ce-4246-90a8-b057742c14af,Namespace:calico-system,Attempt:0,}" Jan 30 13:53:08.833143 kubelet[3510]: E0130 13:53:08.831697 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:08.833143 kubelet[3510]: W0130 13:53:08.831733 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:08.833143 kubelet[3510]: E0130 13:53:08.831765 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:08.878371 containerd[1983]: time="2025-01-30T13:53:08.877671761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:08.878371 containerd[1983]: time="2025-01-30T13:53:08.877951742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:08.878371 containerd[1983]: time="2025-01-30T13:53:08.878006368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:08.880272 containerd[1983]: time="2025-01-30T13:53:08.879913432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:08.899786 kubelet[3510]: E0130 13:53:08.899745 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:08.899786 kubelet[3510]: W0130 13:53:08.899776 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:08.900074 kubelet[3510]: E0130 13:53:08.899802 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:08.912019 kubelet[3510]: E0130 13:53:08.911987 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:08.912019 kubelet[3510]: W0130 13:53:08.912017 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:08.913300 kubelet[3510]: E0130 13:53:08.912044 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:08.944578 systemd[1]: Started cri-containerd-c1f999f519bf2cc21047369dd180e4b97b694276028db8a4c16005bb252f3f08.scope - libcontainer container c1f999f519bf2cc21047369dd180e4b97b694276028db8a4c16005bb252f3f08. Jan 30 13:53:08.971437 containerd[1983]: time="2025-01-30T13:53:08.970650605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79lvl,Uid:bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb,Namespace:calico-system,Attempt:0,}" Jan 30 13:53:08.978958 kubelet[3510]: I0130 13:53:08.978227 3510 topology_manager.go:215] "Topology Admit Handler" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" podNamespace="calico-system" podName="csi-node-driver-dlwgg" Jan 30 13:53:08.983119 kubelet[3510]: E0130 13:53:08.982681 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:09.079325 kubelet[3510]: E0130 13:53:09.079291 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.079325 kubelet[3510]: W0130 13:53:09.079338 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.079696 kubelet[3510]: E0130 13:53:09.079369 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.081188 kubelet[3510]: E0130 13:53:09.081162 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.081188 kubelet[3510]: W0130 13:53:09.081187 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.081698 kubelet[3510]: E0130 13:53:09.081208 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.086558 kubelet[3510]: E0130 13:53:09.086530 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.086558 kubelet[3510]: W0130 13:53:09.086554 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.086843 kubelet[3510]: E0130 13:53:09.086579 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.088740 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.095067 kubelet[3510]: W0130 13:53:09.088761 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.088786 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.091425 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.095067 kubelet[3510]: W0130 13:53:09.092023 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.092055 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.092775 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.095067 kubelet[3510]: W0130 13:53:09.092788 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.092820 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.095067 kubelet[3510]: E0130 13:53:09.094553 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.096594 kubelet[3510]: W0130 13:53:09.094568 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.096594 kubelet[3510]: E0130 13:53:09.094587 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.096594 kubelet[3510]: E0130 13:53:09.095410 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.096594 kubelet[3510]: W0130 13:53:09.095424 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.096594 kubelet[3510]: E0130 13:53:09.095442 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.096594 kubelet[3510]: E0130 13:53:09.096180 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.096594 kubelet[3510]: W0130 13:53:09.096193 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.096594 kubelet[3510]: E0130 13:53:09.096209 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.097526 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.100131 kubelet[3510]: W0130 13:53:09.097541 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.097557 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.098143 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.100131 kubelet[3510]: W0130 13:53:09.098155 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.098262 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.099388 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.100131 kubelet[3510]: W0130 13:53:09.099400 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.100131 kubelet[3510]: E0130 13:53:09.099414 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.101934 kubelet[3510]: E0130 13:53:09.100257 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.101934 kubelet[3510]: W0130 13:53:09.100269 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.101934 kubelet[3510]: E0130 13:53:09.100283 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.104109 kubelet[3510]: E0130 13:53:09.104062 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.104228 kubelet[3510]: W0130 13:53:09.104089 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.104228 kubelet[3510]: E0130 13:53:09.104144 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.105125 kubelet[3510]: E0130 13:53:09.104422 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.105125 kubelet[3510]: W0130 13:53:09.104445 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.105125 kubelet[3510]: E0130 13:53:09.104458 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.105125 kubelet[3510]: E0130 13:53:09.104768 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.105125 kubelet[3510]: W0130 13:53:09.104779 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.105125 kubelet[3510]: E0130 13:53:09.104792 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.106142 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.108545 kubelet[3510]: W0130 13:53:09.106239 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.106257 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.106657 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.108545 kubelet[3510]: W0130 13:53:09.106668 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.106682 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.108258 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.108545 kubelet[3510]: W0130 13:53:09.108272 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.108545 kubelet[3510]: E0130 13:53:09.108287 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.108980 kubelet[3510]: E0130 13:53:09.108959 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.109025 kubelet[3510]: W0130 13:53:09.108979 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.109025 kubelet[3510]: E0130 13:53:09.108995 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.110230 kubelet[3510]: E0130 13:53:09.110176 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.110230 kubelet[3510]: W0130 13:53:09.110223 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.110420 kubelet[3510]: E0130 13:53:09.110242 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.110420 kubelet[3510]: I0130 13:53:09.110277 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gtsz\" (UniqueName: \"kubernetes.io/projected/092d9e15-ee48-4734-aba0-f5135cecdc7c-kube-api-access-8gtsz\") pod \"csi-node-driver-dlwgg\" (UID: \"092d9e15-ee48-4734-aba0-f5135cecdc7c\") " pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:09.112132 kubelet[3510]: E0130 13:53:09.112086 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.112132 kubelet[3510]: W0130 13:53:09.112117 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.113256 kubelet[3510]: E0130 13:53:09.112542 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.113256 kubelet[3510]: I0130 13:53:09.112571 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092d9e15-ee48-4734-aba0-f5135cecdc7c-kubelet-dir\") pod \"csi-node-driver-dlwgg\" (UID: \"092d9e15-ee48-4734-aba0-f5135cecdc7c\") " pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:09.113256 kubelet[3510]: E0130 13:53:09.113078 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.113256 kubelet[3510]: W0130 13:53:09.113091 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.114006 kubelet[3510]: E0130 13:53:09.113962 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.114541 kubelet[3510]: E0130 13:53:09.114515 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.114541 kubelet[3510]: W0130 13:53:09.114533 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.114666 kubelet[3510]: E0130 13:53:09.114647 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.115492 kubelet[3510]: I0130 13:53:09.114681 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/092d9e15-ee48-4734-aba0-f5135cecdc7c-socket-dir\") pod \"csi-node-driver-dlwgg\" (UID: \"092d9e15-ee48-4734-aba0-f5135cecdc7c\") " pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:09.116232 kubelet[3510]: E0130 13:53:09.116205 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.116232 kubelet[3510]: W0130 13:53:09.116222 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.116692 kubelet[3510]: E0130 13:53:09.116467 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.117171 kubelet[3510]: E0130 13:53:09.116945 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.117171 kubelet[3510]: W0130 13:53:09.116960 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.117171 kubelet[3510]: E0130 13:53:09.116978 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.118814 kubelet[3510]: E0130 13:53:09.118786 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.118814 kubelet[3510]: W0130 13:53:09.118804 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.118944 kubelet[3510]: E0130 13:53:09.118916 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.122464 kubelet[3510]: E0130 13:53:09.122431 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.122464 kubelet[3510]: W0130 13:53:09.122460 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.124620 kubelet[3510]: E0130 13:53:09.122494 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.124620 kubelet[3510]: E0130 13:53:09.123341 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.124620 kubelet[3510]: W0130 13:53:09.123356 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.124620 kubelet[3510]: E0130 13:53:09.123375 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.125518 kubelet[3510]: E0130 13:53:09.124940 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.125518 kubelet[3510]: W0130 13:53:09.124957 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.125518 kubelet[3510]: E0130 13:53:09.124976 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.125518 kubelet[3510]: I0130 13:53:09.125393 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/092d9e15-ee48-4734-aba0-f5135cecdc7c-varrun\") pod \"csi-node-driver-dlwgg\" (UID: \"092d9e15-ee48-4734-aba0-f5135cecdc7c\") " pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:09.127222 kubelet[3510]: E0130 13:53:09.126611 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.127222 kubelet[3510]: W0130 13:53:09.126639 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.127222 kubelet[3510]: E0130 13:53:09.126661 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.130280 kubelet[3510]: E0130 13:53:09.129174 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.130280 kubelet[3510]: W0130 13:53:09.129199 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.134060 kubelet[3510]: E0130 13:53:09.131006 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.134060 kubelet[3510]: E0130 13:53:09.131683 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.134060 kubelet[3510]: W0130 13:53:09.131701 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.134060 kubelet[3510]: E0130 13:53:09.131723 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.134060 kubelet[3510]: I0130 13:53:09.131798 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/092d9e15-ee48-4734-aba0-f5135cecdc7c-registration-dir\") pod \"csi-node-driver-dlwgg\" (UID: \"092d9e15-ee48-4734-aba0-f5135cecdc7c\") " pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:09.134060 kubelet[3510]: E0130 13:53:09.133594 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.134060 kubelet[3510]: W0130 13:53:09.133610 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.134060 kubelet[3510]: E0130 13:53:09.133627 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.136865 kubelet[3510]: E0130 13:53:09.135300 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.136865 kubelet[3510]: W0130 13:53:09.135317 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.136865 kubelet[3510]: E0130 13:53:09.135498 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.144509 containerd[1983]: time="2025-01-30T13:53:09.134022978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:09.144509 containerd[1983]: time="2025-01-30T13:53:09.134130753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:09.144509 containerd[1983]: time="2025-01-30T13:53:09.134167742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:09.144509 containerd[1983]: time="2025-01-30T13:53:09.134308744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:09.213743 systemd[1]: Started cri-containerd-01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930.scope - libcontainer container 01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930. 
Jan 30 13:53:09.232937 kubelet[3510]: E0130 13:53:09.232892 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.232937 kubelet[3510]: W0130 13:53:09.232922 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.232937 kubelet[3510]: E0130 13:53:09.232946 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:53:09.263321 kubelet[3510]: E0130 13:53:09.263238 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.263321 kubelet[3510]: W0130 13:53:09.263250 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.263321 kubelet[3510]: E0130 13:53:09.263264 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:53:09.303405 kubelet[3510]: E0130 13:53:09.303349 3510 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:53:09.303405 kubelet[3510]: W0130 13:53:09.303377 3510 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:53:09.303983 kubelet[3510]: E0130 13:53:09.303700 3510 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:53:09.323149 containerd[1983]: time="2025-01-30T13:53:09.322462221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79lvl,Uid:bbb7a247-88a9-407e-bdc1-d8a1b8ba3fbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\"" Jan 30 13:53:09.331986 containerd[1983]: time="2025-01-30T13:53:09.331946969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:53:09.417560 containerd[1983]: time="2025-01-30T13:53:09.417344894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9f9f68dfb-6kdt9,Uid:5e4c27cd-a0ce-4246-90a8-b057742c14af,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1f999f519bf2cc21047369dd180e4b97b694276028db8a4c16005bb252f3f08\"" Jan 30 13:53:10.593091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008516596.mount: Deactivated successfully. Jan 30 13:53:10.731208 kubelet[3510]: E0130 13:53:10.729997 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:10.792765 containerd[1983]: time="2025-01-30T13:53:10.792573296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:10.795712 containerd[1983]: time="2025-01-30T13:53:10.795292913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:53:10.799737 containerd[1983]: time="2025-01-30T13:53:10.798768656Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:10.802503 containerd[1983]: time="2025-01-30T13:53:10.802461228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:10.803261 containerd[1983]: time="2025-01-30T13:53:10.803216460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.471166695s" Jan 30 13:53:10.803413 containerd[1983]: 
time="2025-01-30T13:53:10.803267934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:53:10.805683 containerd[1983]: time="2025-01-30T13:53:10.805643470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:53:10.807644 containerd[1983]: time="2025-01-30T13:53:10.807310024Z" level=info msg="CreateContainer within sandbox \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:53:10.847632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211615164.mount: Deactivated successfully. Jan 30 13:53:10.862383 containerd[1983]: time="2025-01-30T13:53:10.862258389Z" level=info msg="CreateContainer within sandbox \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245\"" Jan 30 13:53:10.864994 containerd[1983]: time="2025-01-30T13:53:10.863118356Z" level=info msg="StartContainer for \"3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245\"" Jan 30 13:53:10.915620 systemd[1]: Started cri-containerd-3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245.scope - libcontainer container 3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245. Jan 30 13:53:10.959236 containerd[1983]: time="2025-01-30T13:53:10.958928773Z" level=info msg="StartContainer for \"3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245\" returns successfully" Jan 30 13:53:10.982215 systemd[1]: cri-containerd-3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245.scope: Deactivated successfully. Jan 30 13:53:11.107700 containerd[1983]: time="2025-01-30T13:53:11.049741174Z" level=info msg="shim disconnected" id=3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245 namespace=k8s.io Jan 30 13:53:11.107700 containerd[1983]: time="2025-01-30T13:53:11.107614741Z" level=warning msg="cleaning up after shim disconnected" id=3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245 namespace=k8s.io Jan 30 13:53:11.107700 containerd[1983]: time="2025-01-30T13:53:11.107635088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:11.840701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fb34398b1172dfab8452c2a357e4f47aa6a29c5a18cbbb5a74324e9b4def245-rootfs.mount: Deactivated successfully. 
Jan 30 13:53:12.731051 kubelet[3510]: E0130 13:53:12.730837 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:13.445452 containerd[1983]: time="2025-01-30T13:53:13.445399459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:13.446710 containerd[1983]: time="2025-01-30T13:53:13.446551697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:53:13.449127 containerd[1983]: time="2025-01-30T13:53:13.447809578Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:13.450436 containerd[1983]: time="2025-01-30T13:53:13.450402209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:13.451065 containerd[1983]: time="2025-01-30T13:53:13.451027434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.645345958s" Jan 30 13:53:13.451166 containerd[1983]: time="2025-01-30T13:53:13.451071942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:53:13.452719 containerd[1983]: time="2025-01-30T13:53:13.452690376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:53:13.498340 containerd[1983]: time="2025-01-30T13:53:13.498242189Z" level=info msg="CreateContainer within sandbox \"c1f999f519bf2cc21047369dd180e4b97b694276028db8a4c16005bb252f3f08\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:53:13.542728 containerd[1983]: time="2025-01-30T13:53:13.542638000Z" level=info msg="CreateContainer within sandbox \"c1f999f519bf2cc21047369dd180e4b97b694276028db8a4c16005bb252f3f08\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5fc11c6d68373ff39accac7568f2162d1053b755937c615eefdfca272816fe7a\"" Jan 30 13:53:13.544721 containerd[1983]: time="2025-01-30T13:53:13.544682441Z" level=info msg="StartContainer for \"5fc11c6d68373ff39accac7568f2162d1053b755937c615eefdfca272816fe7a\"" Jan 30 13:53:13.621989 systemd[1]: Started cri-containerd-5fc11c6d68373ff39accac7568f2162d1053b755937c615eefdfca272816fe7a.scope - libcontainer container 5fc11c6d68373ff39accac7568f2162d1053b755937c615eefdfca272816fe7a. 
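
The "in 1.471166695s" and "in 2.645345958s" suffixes on the Pulled messages are Go time.Duration strings, so the pull timing can be cross-checked mechanically: the typha PullImage request above was issued at 13:53:10.805643470Z, and adding the reported 2.645345958s lands essentially at the 13:53:13.451 timestamp where the result is logged. A small sketch of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Durations exactly as they appear in the containerd messages.
        flexvol, _ := time.ParseDuration("1.471166695s")
        typha, _ := time.ParseDuration("2.645345958s")

        // PullImage for typha was issued at 13:53:10.805643470Z; adding
        // the reported duration reaches ~13:53:13.451, matching the log.
        start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:53:10.805643470Z")
        end := start.Add(typha)
        fmt.Println(flexvol, typha, end.Format(time.RFC3339Nano))
    }
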
Jan 30 13:53:13.686770 containerd[1983]: time="2025-01-30T13:53:13.686721446Z" level=info msg="StartContainer for \"5fc11c6d68373ff39accac7568f2162d1053b755937c615eefdfca272816fe7a\" returns successfully" Jan 30 13:53:14.279618 kubelet[3510]: I0130 13:53:14.278891 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9f9f68dfb-6kdt9" podStartSLOduration=2.247159335 podStartE2EDuration="6.278867012s" podCreationTimestamp="2025-01-30 13:53:08 +0000 UTC" firstStartedPulling="2025-01-30 13:53:09.420808655 +0000 UTC m=+22.957573971" lastFinishedPulling="2025-01-30 13:53:13.452516321 +0000 UTC m=+26.989281648" observedRunningTime="2025-01-30 13:53:14.277579939 +0000 UTC m=+27.814345273" watchObservedRunningTime="2025-01-30 13:53:14.278867012 +0000 UTC m=+27.815632389" Jan 30 13:53:14.738625 kubelet[3510]: E0130 13:53:14.738552 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:15.264988 kubelet[3510]: I0130 13:53:15.264956 3510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:53:16.734033 kubelet[3510]: E0130 13:53:16.733987 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:18.173481 containerd[1983]: time="2025-01-30T13:53:18.173427133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.175478 containerd[1983]: time="2025-01-30T13:53:18.175222482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:53:18.179139 containerd[1983]: time="2025-01-30T13:53:18.177732180Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.182803 containerd[1983]: time="2025-01-30T13:53:18.182442562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:18.183836 containerd[1983]: time="2025-01-30T13:53:18.183733936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.731008218s" Jan 30 13:53:18.183955 containerd[1983]: time="2025-01-30T13:53:18.183833539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:53:18.192036 containerd[1983]: time="2025-01-30T13:53:18.191691572Z" level=info msg="CreateContainer within sandbox \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:53:18.232232 containerd[1983]: time="2025-01-30T13:53:18.232185129Z" level=info msg="CreateContainer within sandbox \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c\"" Jan 30 13:53:18.234192 containerd[1983]: time="2025-01-30T13:53:18.233542907Z" level=info msg="StartContainer for \"c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c\"" Jan 30 13:53:18.327525 systemd[1]: run-containerd-runc-k8s.io-c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c-runc.teG4Hk.mount: Deactivated successfully. Jan 30 13:53:18.339402 systemd[1]: Started cri-containerd-c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c.scope - libcontainer container c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c. Jan 30 13:53:18.412398 containerd[1983]: time="2025-01-30T13:53:18.412281345Z" level=info msg="StartContainer for \"c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c\" returns successfully" Jan 30 13:53:18.747702 kubelet[3510]: E0130 13:53:18.747617 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:19.152536 systemd[1]: cri-containerd-c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c.scope: Deactivated successfully. Jan 30 13:53:19.224305 kubelet[3510]: I0130 13:53:19.222865 3510 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:53:19.229080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c-rootfs.mount: Deactivated successfully. 
Jan 30 13:53:19.319476 containerd[1983]: time="2025-01-30T13:53:19.319345357Z" level=info msg="shim disconnected" id=c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c namespace=k8s.io Jan 30 13:53:19.319476 containerd[1983]: time="2025-01-30T13:53:19.319439290Z" level=warning msg="cleaning up after shim disconnected" id=c23b67e31e14d8f23bfb15bf2b574772ac914a809b77cb01a1fbf0e34124292c namespace=k8s.io Jan 30 13:53:19.321400 containerd[1983]: time="2025-01-30T13:53:19.319455930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:19.368268 kubelet[3510]: I0130 13:53:19.368163 3510 topology_manager.go:215] "Topology Admit Handler" podUID="c4e15632-c2e7-4cc7-a34a-0ee80ce9b661" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4dgqz" Jan 30 13:53:19.374473 kubelet[3510]: I0130 13:53:19.374422 3510 topology_manager.go:215] "Topology Admit Handler" podUID="2b094d3d-1ca0-4567-b2d8-ca3df2f86d82" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k9ts7" Jan 30 13:53:19.374718 kubelet[3510]: I0130 13:53:19.374682 3510 topology_manager.go:215] "Topology Admit Handler" podUID="11b5c859-222b-40cc-bebe-26c0a9a42d40" podNamespace="calico-apiserver" podName="calico-apiserver-7455f859bb-tzswx" Jan 30 13:53:19.374966 kubelet[3510]: I0130 13:53:19.374821 3510 topology_manager.go:215] "Topology Admit Handler" podUID="228374c3-8542-47d9-a2e1-c564d0ab650c" podNamespace="calico-apiserver" podName="calico-apiserver-7455f859bb-hlpjx" Jan 30 13:53:19.377983 kubelet[3510]: I0130 13:53:19.377946 3510 topology_manager.go:215] "Topology Admit Handler" podUID="99b6f917-c78e-4eb5-a202-0c6311880c4e" podNamespace="calico-system" podName="calico-kube-controllers-6c996ffb5d-7txfd" Jan 30 13:53:19.428206 systemd[1]: Created slice kubepods-burstable-podc4e15632_c2e7_4cc7_a34a_0ee80ce9b661.slice - libcontainer container kubepods-burstable-podc4e15632_c2e7_4cc7_a34a_0ee80ce9b661.slice. Jan 30 13:53:19.452833 systemd[1]: Created slice kubepods-besteffort-pod11b5c859_222b_40cc_bebe_26c0a9a42d40.slice - libcontainer container kubepods-besteffort-pod11b5c859_222b_40cc_bebe_26c0a9a42d40.slice. Jan 30 13:53:19.464188 systemd[1]: Created slice kubepods-besteffort-pod228374c3_8542_47d9_a2e1_c564d0ab650c.slice - libcontainer container kubepods-besteffort-pod228374c3_8542_47d9_a2e1_c564d0ab650c.slice. 
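
The "Created slice" lines encode the pod cgroup layout under the systemd cgroup driver: the QoS class picks the parent (the coredns pods are Burstable, meaning at least one container sets resource requests; the Calico apiserver and kube-controllers pods set none and fall into BestEffort), and the leaf name is "pod" plus the pod UID with dashes mapped to underscores, since "-" is systemd's path separator (see the escaping sketch above). A sketch of the mapping for the two classes seen here (Guaranteed pods would sit directly under kubepods.slice and are not handled):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the slice naming visible in the log:
    // parent from the QoS class, leaf from the pod UID with "-" -> "_".
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "c4e15632-c2e7-4cc7-a34a-0ee80ce9b661"))
        fmt.Println(podSliceName("besteffort", "11b5c859-222b-40cc-bebe-26c0a9a42d40"))
        // kubepods-burstable-podc4e15632_c2e7_4cc7_a34a_0ee80ce9b661.slice
        // kubepods-besteffort-pod11b5c859_222b_40cc_bebe_26c0a9a42d40.slice
    }
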
Jan 30 13:53:19.479032 kubelet[3510]: I0130 13:53:19.477340 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/228374c3-8542-47d9-a2e1-c564d0ab650c-calico-apiserver-certs\") pod \"calico-apiserver-7455f859bb-hlpjx\" (UID: \"228374c3-8542-47d9-a2e1-c564d0ab650c\") " pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" Jan 30 13:53:19.479032 kubelet[3510]: I0130 13:53:19.477523 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdn27\" (UniqueName: \"kubernetes.io/projected/228374c3-8542-47d9-a2e1-c564d0ab650c-kube-api-access-mdn27\") pod \"calico-apiserver-7455f859bb-hlpjx\" (UID: \"228374c3-8542-47d9-a2e1-c564d0ab650c\") " pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" Jan 30 13:53:19.479032 kubelet[3510]: I0130 13:53:19.477566 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11b5c859-222b-40cc-bebe-26c0a9a42d40-calico-apiserver-certs\") pod \"calico-apiserver-7455f859bb-tzswx\" (UID: \"11b5c859-222b-40cc-bebe-26c0a9a42d40\") " pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" Jan 30 13:53:19.479032 kubelet[3510]: I0130 13:53:19.477598 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99b6f917-c78e-4eb5-a202-0c6311880c4e-tigera-ca-bundle\") pod \"calico-kube-controllers-6c996ffb5d-7txfd\" (UID: \"99b6f917-c78e-4eb5-a202-0c6311880c4e\") " pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" Jan 30 13:53:19.479032 kubelet[3510]: I0130 13:53:19.477626 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4e15632-c2e7-4cc7-a34a-0ee80ce9b661-config-volume\") pod \"coredns-7db6d8ff4d-4dgqz\" (UID: \"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661\") " pod="kube-system/coredns-7db6d8ff4d-4dgqz" Jan 30 13:53:19.478823 systemd[1]: Created slice kubepods-burstable-pod2b094d3d_1ca0_4567_b2d8_ca3df2f86d82.slice - libcontainer container kubepods-burstable-pod2b094d3d_1ca0_4567_b2d8_ca3df2f86d82.slice. 
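
Each VerifyControllerAttachedVolume entry identifies its volume by a "unique name", which for the in-tree plugins seen here is simply the plugin name, pod UID, and pod-local volume name joined as <plugin>/<podUID>-<volumeName>. A trivial sketch reproducing the identifiers above (the helper name is illustrative):

    package main

    import "fmt"

    // uniqueVolumeName mirrors the identifier format in the reconciler
    // entries: "<plugin>/<podUID>-<volumeName>".
    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        fmt.Println(uniqueVolumeName("kubernetes.io/secret",
            "228374c3-8542-47d9-a2e1-c564d0ab650c", "calico-apiserver-certs"))
        fmt.Println(uniqueVolumeName("kubernetes.io/configmap",
            "c4e15632-c2e7-4cc7-a34a-0ee80ce9b661", "config-volume"))
    }
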
Jan 30 13:53:19.479513 kubelet[3510]: I0130 13:53:19.477650 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wx8\" (UniqueName: \"kubernetes.io/projected/11b5c859-222b-40cc-bebe-26c0a9a42d40-kube-api-access-n6wx8\") pod \"calico-apiserver-7455f859bb-tzswx\" (UID: \"11b5c859-222b-40cc-bebe-26c0a9a42d40\") " pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" Jan 30 13:53:19.479513 kubelet[3510]: I0130 13:53:19.477672 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-266lk\" (UniqueName: \"kubernetes.io/projected/99b6f917-c78e-4eb5-a202-0c6311880c4e-kube-api-access-266lk\") pod \"calico-kube-controllers-6c996ffb5d-7txfd\" (UID: \"99b6f917-c78e-4eb5-a202-0c6311880c4e\") " pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" Jan 30 13:53:19.479513 kubelet[3510]: I0130 13:53:19.477703 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smstr\" (UniqueName: \"kubernetes.io/projected/c4e15632-c2e7-4cc7-a34a-0ee80ce9b661-kube-api-access-smstr\") pod \"coredns-7db6d8ff4d-4dgqz\" (UID: \"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661\") " pod="kube-system/coredns-7db6d8ff4d-4dgqz" Jan 30 13:53:19.479513 kubelet[3510]: I0130 13:53:19.477730 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltbpr\" (UniqueName: \"kubernetes.io/projected/2b094d3d-1ca0-4567-b2d8-ca3df2f86d82-kube-api-access-ltbpr\") pod \"coredns-7db6d8ff4d-k9ts7\" (UID: \"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82\") " pod="kube-system/coredns-7db6d8ff4d-k9ts7" Jan 30 13:53:19.479513 kubelet[3510]: I0130 13:53:19.477757 3510 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b094d3d-1ca0-4567-b2d8-ca3df2f86d82-config-volume\") pod \"coredns-7db6d8ff4d-k9ts7\" (UID: \"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82\") " pod="kube-system/coredns-7db6d8ff4d-k9ts7" Jan 30 13:53:19.492816 systemd[1]: Created slice kubepods-besteffort-pod99b6f917_c78e_4eb5_a202_0c6311880c4e.slice - libcontainer container kubepods-besteffort-pod99b6f917_c78e_4eb5_a202_0c6311880c4e.slice. 
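
The kube-api-access-* volumes in these entries (suffixes mdn27, n6wx8, 266lk, smstr, ltbpr) are the projected service-account token volumes injected into every pod; the five-character suffix is drawn from Kubernetes' reduced random alphabet, which omits vowels (and y) and the digits 0, 1 and 3 so generated names cannot spell words or be misread. A sketch of the generation; randSuffix is an illustrative stand-in for rand.String in k8s.io/apimachinery/pkg/util/rand, whose alphabet the constant below is believed to mirror:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // alphanums is the reduced alphabet used for generated name suffixes;
    // every suffix in the log above is drawn from it.
    const alphanums = "bcdfghjklmnpqrstvwxz2456789"

    // randSuffix returns n random characters from the reduced alphabet.
    func randSuffix(n int) string {
        b := make([]byte, n)
        for i := range b {
            b[i] = alphanums[rand.Intn(len(alphanums))]
        }
        return string(b)
    }

    func main() {
        fmt.Println("kube-api-access-" + randSuffix(5))
    }
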
Jan 30 13:53:19.747671 containerd[1983]: time="2025-01-30T13:53:19.747620338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4dgqz,Uid:c4e15632-c2e7-4cc7-a34a-0ee80ce9b661,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:19.778499 containerd[1983]: time="2025-01-30T13:53:19.777462027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-tzswx,Uid:11b5c859-222b-40cc-bebe-26c0a9a42d40,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:53:19.779513 containerd[1983]: time="2025-01-30T13:53:19.779458652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-hlpjx,Uid:228374c3-8542-47d9-a2e1-c564d0ab650c,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:53:19.798025 containerd[1983]: time="2025-01-30T13:53:19.797975994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k9ts7,Uid:2b094d3d-1ca0-4567-b2d8-ca3df2f86d82,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:19.803141 containerd[1983]: time="2025-01-30T13:53:19.801986178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c996ffb5d-7txfd,Uid:99b6f917-c78e-4eb5-a202-0c6311880c4e,Namespace:calico-system,Attempt:0,}" Jan 30 13:53:20.319222 containerd[1983]: time="2025-01-30T13:53:20.316666979Z" level=error msg="Failed to destroy network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.337349 containerd[1983]: time="2025-01-30T13:53:20.337308498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:53:20.348225 containerd[1983]: time="2025-01-30T13:53:20.348169308Z" level=error msg="encountered an error cleaning up failed sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.348495 containerd[1983]: time="2025-01-30T13:53:20.348432222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c996ffb5d-7txfd,Uid:99b6f917-c78e-4eb5-a202-0c6311880c4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.404119 kubelet[3510]: E0130 13:53:20.349739 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.408295 containerd[1983]: time="2025-01-30T13:53:20.408235886Z" level=error msg="Failed to destroy network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.412412 containerd[1983]: time="2025-01-30T13:53:20.411357539Z" level=error msg="encountered an error cleaning up failed sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.414329 containerd[1983]: time="2025-01-30T13:53:20.414204914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-hlpjx,Uid:228374c3-8542-47d9-a2e1-c564d0ab650c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.414329 containerd[1983]: time="2025-01-30T13:53:20.412673758Z" level=error msg="Failed to destroy network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.419259 containerd[1983]: time="2025-01-30T13:53:20.417427634Z" level=error msg="encountered an error cleaning up failed sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.419259 containerd[1983]: time="2025-01-30T13:53:20.417500417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-tzswx,Uid:11b5c859-222b-40cc-bebe-26c0a9a42d40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.419464 kubelet[3510]: E0130 13:53:20.412343 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" Jan 30 13:53:20.419464 kubelet[3510]: E0130 13:53:20.415639 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" Jan 30 13:53:20.419464 kubelet[3510]: E0130 13:53:20.415721 3510 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c996ffb5d-7txfd_calico-system(99b6f917-c78e-4eb5-a202-0c6311880c4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c996ffb5d-7txfd_calico-system(99b6f917-c78e-4eb5-a202-0c6311880c4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" podUID="99b6f917-c78e-4eb5-a202-0c6311880c4e" Jan 30 13:53:20.419649 kubelet[3510]: E0130 13:53:20.417297 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.419649 kubelet[3510]: E0130 13:53:20.417358 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" Jan 30 13:53:20.419649 kubelet[3510]: E0130 13:53:20.417390 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" Jan 30 13:53:20.419773 kubelet[3510]: E0130 13:53:20.417438 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7455f859bb-hlpjx_calico-apiserver(228374c3-8542-47d9-a2e1-c564d0ab650c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7455f859bb-hlpjx_calico-apiserver(228374c3-8542-47d9-a2e1-c564d0ab650c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" podUID="228374c3-8542-47d9-a2e1-c564d0ab650c" Jan 30 13:53:20.419773 kubelet[3510]: E0130 13:53:20.417668 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:53:20.419773 kubelet[3510]: E0130 13:53:20.417710 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" Jan 30 13:53:20.422147 kubelet[3510]: E0130 13:53:20.417732 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" Jan 30 13:53:20.422147 kubelet[3510]: E0130 13:53:20.417771 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7455f859bb-tzswx_calico-apiserver(11b5c859-222b-40cc-bebe-26c0a9a42d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7455f859bb-tzswx_calico-apiserver(11b5c859-222b-40cc-bebe-26c0a9a42d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" podUID="11b5c859-222b-40cc-bebe-26c0a9a42d40" Jan 30 13:53:20.437731 containerd[1983]: time="2025-01-30T13:53:20.424736595Z" level=error msg="Failed to destroy network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.437731 containerd[1983]: time="2025-01-30T13:53:20.432961529Z" level=error msg="encountered an error cleaning up failed sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.437731 containerd[1983]: time="2025-01-30T13:53:20.433043377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4dgqz,Uid:c4e15632-c2e7-4cc7-a34a-0ee80ce9b661,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.440544 kubelet[3510]: E0130 13:53:20.435606 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.440544 kubelet[3510]: E0130 13:53:20.435713 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4dgqz" Jan 30 13:53:20.440544 kubelet[3510]: E0130 13:53:20.435738 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4dgqz" Jan 30 13:53:20.440780 kubelet[3510]: E0130 13:53:20.435786 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4dgqz_kube-system(c4e15632-c2e7-4cc7-a34a-0ee80ce9b661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4dgqz_kube-system(c4e15632-c2e7-4cc7-a34a-0ee80ce9b661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4dgqz" podUID="c4e15632-c2e7-4cc7-a34a-0ee80ce9b661" Jan 30 13:53:20.454128 containerd[1983]: time="2025-01-30T13:53:20.453670700Z" level=error msg="Failed to destroy network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.456215 containerd[1983]: time="2025-01-30T13:53:20.454423867Z" level=error msg="encountered an error cleaning up failed sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.456215 containerd[1983]: time="2025-01-30T13:53:20.454488261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k9ts7,Uid:2b094d3d-1ca0-4567-b2d8-ca3df2f86d82,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.457124 kubelet[3510]: E0130 13:53:20.456646 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.457124 kubelet[3510]: E0130 13:53:20.456778 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k9ts7" Jan 30 13:53:20.457124 kubelet[3510]: E0130 13:53:20.457020 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k9ts7" Jan 30 13:53:20.457806 kubelet[3510]: E0130 13:53:20.457087 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-k9ts7_kube-system(2b094d3d-1ca0-4567-b2d8-ca3df2f86d82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-k9ts7_kube-system(2b094d3d-1ca0-4567-b2d8-ca3df2f86d82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k9ts7" podUID="2b094d3d-1ca0-4567-b2d8-ca3df2f86d82" Jan 30 13:53:20.746116 systemd[1]: Created slice kubepods-besteffort-pod092d9e15_ee48_4734_aba0_f5135cecdc7c.slice - libcontainer container kubepods-besteffort-pod092d9e15_ee48_4734_aba0_f5135cecdc7c.slice. 
Jan 30 13:53:20.750273 containerd[1983]: time="2025-01-30T13:53:20.750232042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dlwgg,Uid:092d9e15-ee48-4734-aba0-f5135cecdc7c,Namespace:calico-system,Attempt:0,}" Jan 30 13:53:20.907939 containerd[1983]: time="2025-01-30T13:53:20.907884789Z" level=error msg="Failed to destroy network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.911130 containerd[1983]: time="2025-01-30T13:53:20.908471265Z" level=error msg="encountered an error cleaning up failed sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.911130 containerd[1983]: time="2025-01-30T13:53:20.908571493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dlwgg,Uid:092d9e15-ee48-4734-aba0-f5135cecdc7c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.911269 kubelet[3510]: E0130 13:53:20.909493 3510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:20.911269 kubelet[3510]: E0130 13:53:20.909565 3510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:20.911269 kubelet[3510]: E0130 13:53:20.909605 3510 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dlwgg" Jan 30 13:53:20.911592 kubelet[3510]: E0130 13:53:20.909662 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dlwgg_calico-system(092d9e15-ee48-4734-aba0-f5135cecdc7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dlwgg_calico-system(092d9e15-ee48-4734-aba0-f5135cecdc7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:20.912996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb-shm.mount: Deactivated successfully. Jan 30 13:53:21.311452 kubelet[3510]: I0130 13:53:21.311252 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:21.314732 kubelet[3510]: I0130 13:53:21.314697 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:21.352714 containerd[1983]: time="2025-01-30T13:53:21.352226629Z" level=info msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" Jan 30 13:53:21.357421 containerd[1983]: time="2025-01-30T13:53:21.356967379Z" level=info msg="Ensure that sandbox a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb in task-service has been cleanup successfully" Jan 30 13:53:21.359274 kubelet[3510]: I0130 13:53:21.359032 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:21.364016 containerd[1983]: time="2025-01-30T13:53:21.363972874Z" level=info msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" Jan 30 13:53:21.364258 containerd[1983]: time="2025-01-30T13:53:21.364230994Z" level=info msg="Ensure that sandbox 5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b in task-service has been cleanup successfully" Jan 30 13:53:21.367983 containerd[1983]: time="2025-01-30T13:53:21.367928933Z" level=info msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" Jan 30 13:53:21.369503 containerd[1983]: time="2025-01-30T13:53:21.369467120Z" level=info msg="Ensure that sandbox 0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534 in task-service has been cleanup successfully" Jan 30 13:53:21.374140 kubelet[3510]: I0130 13:53:21.373645 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:21.382500 containerd[1983]: time="2025-01-30T13:53:21.382262100Z" level=info msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" Jan 30 13:53:21.383287 containerd[1983]: time="2025-01-30T13:53:21.383187119Z" level=info msg="Ensure that sandbox 31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06 in task-service has been cleanup successfully" Jan 30 13:53:21.391227 kubelet[3510]: I0130 13:53:21.391051 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:21.398781 containerd[1983]: time="2025-01-30T13:53:21.396198296Z" level=info msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" Jan 30 13:53:21.398781 containerd[1983]: time="2025-01-30T13:53:21.396715066Z" level=info msg="Ensure that sandbox 
c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e in task-service has been cleanup successfully" Jan 30 13:53:21.401564 kubelet[3510]: I0130 13:53:21.401534 3510 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:21.415918 containerd[1983]: time="2025-01-30T13:53:21.415868076Z" level=info msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" Jan 30 13:53:21.417498 containerd[1983]: time="2025-01-30T13:53:21.416992970Z" level=info msg="Ensure that sandbox 2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd in task-service has been cleanup successfully" Jan 30 13:53:21.583046 containerd[1983]: time="2025-01-30T13:53:21.582732073Z" level=error msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" failed" error="failed to destroy network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.584228 kubelet[3510]: E0130 13:53:21.582995 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:21.584228 kubelet[3510]: E0130 13:53:21.583066 3510 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb"} Jan 30 13:53:21.584228 kubelet[3510]: E0130 13:53:21.583160 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"092d9e15-ee48-4734-aba0-f5135cecdc7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.584228 kubelet[3510]: E0130 13:53:21.583197 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"092d9e15-ee48-4734-aba0-f5135cecdc7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dlwgg" podUID="092d9e15-ee48-4734-aba0-f5135cecdc7c" Jan 30 13:53:21.624622 containerd[1983]: time="2025-01-30T13:53:21.624525711Z" level=error msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" failed" error="failed to destroy network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.625586 containerd[1983]: time="2025-01-30T13:53:21.624581872Z" level=error msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" failed" error="failed to destroy network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.626203 kubelet[3510]: E0130 13:53:21.625842 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:21.626203 kubelet[3510]: E0130 13:53:21.625902 3510 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b"} Jan 30 13:53:21.626203 kubelet[3510]: E0130 13:53:21.625944 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99b6f917-c78e-4eb5-a202-0c6311880c4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.626203 kubelet[3510]: E0130 13:53:21.625973 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99b6f917-c78e-4eb5-a202-0c6311880c4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" podUID="99b6f917-c78e-4eb5-a202-0c6311880c4e" Jan 30 13:53:21.626576 kubelet[3510]: E0130 13:53:21.626027 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:21.626576 kubelet[3510]: E0130 13:53:21.626063 3510 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e"} Jan 30 13:53:21.626890 kubelet[3510]: E0130 13:53:21.626780 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"228374c3-8542-47d9-a2e1-c564d0ab650c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.626890 kubelet[3510]: E0130 13:53:21.626849 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"228374c3-8542-47d9-a2e1-c564d0ab650c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" podUID="228374c3-8542-47d9-a2e1-c564d0ab650c" Jan 30 13:53:21.630314 containerd[1983]: time="2025-01-30T13:53:21.630270106Z" level=error msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" failed" error="failed to destroy network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.630988 kubelet[3510]: E0130 13:53:21.630805 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:21.630988 kubelet[3510]: E0130 13:53:21.630860 3510 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06"} Jan 30 13:53:21.630988 kubelet[3510]: E0130 13:53:21.630902 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.630988 kubelet[3510]: E0130 13:53:21.630932 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4dgqz" podUID="c4e15632-c2e7-4cc7-a34a-0ee80ce9b661" Jan 30 13:53:21.644160 
containerd[1983]: time="2025-01-30T13:53:21.643422211Z" level=error msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" failed" error="failed to destroy network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.644365 kubelet[3510]: E0130 13:53:21.644281 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:21.644365 kubelet[3510]: E0130 13:53:21.644340 3510 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534"} Jan 30 13:53:21.645193 kubelet[3510]: E0130 13:53:21.644381 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11b5c859-222b-40cc-bebe-26c0a9a42d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.645193 kubelet[3510]: E0130 13:53:21.644413 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11b5c859-222b-40cc-bebe-26c0a9a42d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" podUID="11b5c859-222b-40cc-bebe-26c0a9a42d40" Jan 30 13:53:21.655121 containerd[1983]: time="2025-01-30T13:53:21.654937906Z" level=error msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" failed" error="failed to destroy network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:53:21.655603 kubelet[3510]: E0130 13:53:21.655554 3510 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:21.655709 kubelet[3510]: E0130 13:53:21.655612 3510 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd"} Jan 30 13:53:21.655709 kubelet[3510]: E0130 13:53:21.655655 3510 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:53:21.655709 kubelet[3510]: E0130 13:53:21.655684 3510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k9ts7" podUID="2b094d3d-1ca0-4567-b2d8-ca3df2f86d82" Jan 30 13:53:28.539877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149577545.mount: Deactivated successfully. Jan 30 13:53:28.722768 containerd[1983]: time="2025-01-30T13:53:28.709524112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:53:28.726690 containerd[1983]: time="2025-01-30T13:53:28.726636711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.388789716s" Jan 30 13:53:28.726940 containerd[1983]: time="2025-01-30T13:53:28.726913047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:53:28.758181 containerd[1983]: time="2025-01-30T13:53:28.758127740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:28.811872 containerd[1983]: time="2025-01-30T13:53:28.811636453Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:28.813508 containerd[1983]: time="2025-01-30T13:53:28.813463195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:28.865380 containerd[1983]: time="2025-01-30T13:53:28.865328852Z" level=info msg="CreateContainer within sandbox \"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:53:28.991295 containerd[1983]: time="2025-01-30T13:53:28.991238565Z" level=info msg="CreateContainer within sandbox 
\"01739a7bb355ee4e1e7a576c4d68a7a57aab33744be632b4e0df32f2f44ed930\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0\"" Jan 30 13:53:28.993135 containerd[1983]: time="2025-01-30T13:53:28.992929558Z" level=info msg="StartContainer for \"387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0\"" Jan 30 13:53:29.224319 systemd[1]: Started cri-containerd-387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0.scope - libcontainer container 387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0. Jan 30 13:53:29.337261 containerd[1983]: time="2025-01-30T13:53:29.337209583Z" level=info msg="StartContainer for \"387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0\" returns successfully" Jan 30 13:53:29.560507 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:53:29.563001 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:53:29.590139 kubelet[3510]: I0130 13:53:29.587304 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-79lvl" podStartSLOduration=2.172992336 podStartE2EDuration="21.572421089s" podCreationTimestamp="2025-01-30 13:53:08 +0000 UTC" firstStartedPulling="2025-01-30 13:53:09.328892964 +0000 UTC m=+22.865658285" lastFinishedPulling="2025-01-30 13:53:28.728321721 +0000 UTC m=+42.265087038" observedRunningTime="2025-01-30 13:53:29.569557478 +0000 UTC m=+43.106322815" watchObservedRunningTime="2025-01-30 13:53:29.572421089 +0000 UTC m=+43.109186426" Jan 30 13:53:29.730739 systemd[1]: run-containerd-runc-k8s.io-387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0-runc.WaRtDh.mount: Deactivated successfully. Jan 30 13:53:31.619086 systemd[1]: run-containerd-runc-k8s.io-387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0-runc.TIPgKz.mount: Deactivated successfully. Jan 30 13:53:32.189213 kernel: bpftool[4802]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:53:32.562815 systemd-networkd[1861]: vxlan.calico: Link UP Jan 30 13:53:32.562827 systemd-networkd[1861]: vxlan.calico: Gained carrier Jan 30 13:53:32.566894 (udev-worker)[4827]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:32.594738 (udev-worker)[4591]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:32.601226 (udev-worker)[4834]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:32.764573 containerd[1983]: time="2025-01-30T13:53:32.764529799Z" level=info msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" Jan 30 13:53:32.766383 containerd[1983]: time="2025-01-30T13:53:32.766246788Z" level=info msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.960 [INFO][4864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.972 [INFO][4864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" iface="eth0" netns="/var/run/netns/cni-4c434d9e-eea1-d609-ad79-5020f9c3b2d0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.975 [INFO][4864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" iface="eth0" netns="/var/run/netns/cni-4c434d9e-eea1-d609-ad79-5020f9c3b2d0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.978 [INFO][4864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" iface="eth0" netns="/var/run/netns/cni-4c434d9e-eea1-d609-ad79-5020f9c3b2d0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.978 [INFO][4864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:32.978 [INFO][4864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.405 [INFO][4880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.409 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.410 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.423 [WARNING][4880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.423 [INFO][4880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.428 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:33.439252 containerd[1983]: 2025-01-30 13:53:33.433 [INFO][4864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:33.447193 systemd[1]: run-netns-cni\x2d4c434d9e\x2deea1\x2dd609\x2dad79\x2d5020f9c3b2d0.mount: Deactivated successfully. 
Jan 30 13:53:33.453279 containerd[1983]: time="2025-01-30T13:53:33.453158626Z" level=info msg="TearDown network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" successfully" Jan 30 13:53:33.453279 containerd[1983]: time="2025-01-30T13:53:33.453219998Z" level=info msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" returns successfully" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.975 [INFO][4868] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.975 [INFO][4868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" iface="eth0" netns="/var/run/netns/cni-cb5c10ff-db08-5841-5168-3326b38a002b" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.975 [INFO][4868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" iface="eth0" netns="/var/run/netns/cni-cb5c10ff-db08-5841-5168-3326b38a002b" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.978 [INFO][4868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" iface="eth0" netns="/var/run/netns/cni-cb5c10ff-db08-5841-5168-3326b38a002b" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.979 [INFO][4868] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:32.979 [INFO][4868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.407 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.409 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.428 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.442 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.442 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.445 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:53:33.455925 containerd[1983]: 2025-01-30 13:53:33.453 [INFO][4868] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:33.459863 containerd[1983]: time="2025-01-30T13:53:33.459538810Z" level=info msg="TearDown network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" successfully" Jan 30 13:53:33.459863 containerd[1983]: time="2025-01-30T13:53:33.459645747Z" level=info msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" returns successfully" Jan 30 13:53:33.462251 systemd[1]: run-netns-cni\x2dcb5c10ff\x2ddb08\x2d5841\x2d5168\x2d3326b38a002b.mount: Deactivated successfully. Jan 30 13:53:33.467301 containerd[1983]: time="2025-01-30T13:53:33.467240454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-tzswx,Uid:11b5c859-222b-40cc-bebe-26c0a9a42d40,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:53:33.467513 containerd[1983]: time="2025-01-30T13:53:33.467239139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-hlpjx,Uid:228374c3-8542-47d9-a2e1-c564d0ab650c,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:53:33.731070 containerd[1983]: time="2025-01-30T13:53:33.730984458Z" level=info msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" Jan 30 13:53:33.731256 containerd[1983]: time="2025-01-30T13:53:33.731023623Z" level=info msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" Jan 30 13:53:33.827797 (udev-worker)[4840]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:33.830436 systemd-networkd[1861]: cali7233812dacd: Link UP Jan 30 13:53:33.835177 systemd-networkd[1861]: cali7233812dacd: Gained carrier Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.636 [INFO][4925] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0 calico-apiserver-7455f859bb- calico-apiserver 11b5c859-222b-40cc-bebe-26c0a9a42d40 782 0 2025-01-30 13:53:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7455f859bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-166 calico-apiserver-7455f859bb-tzswx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7233812dacd [] []}} ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.637 [INFO][4925] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.711 [INFO][4948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" 
HandleID="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.728 [INFO][4948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" HandleID="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010da70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-166", "pod":"calico-apiserver-7455f859bb-tzswx", "timestamp":"2025-01-30 13:53:33.711278356 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.732 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.732 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.732 [INFO][4948] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.739 [INFO][4948] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.756 [INFO][4948] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.768 [INFO][4948] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.771 [INFO][4948] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.776 [INFO][4948] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.776 [INFO][4948] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.781 [INFO][4948] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0 Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.789 [INFO][4948] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.809 [INFO][4948] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.129/26] block=192.168.8.128/26 handle="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.809 [INFO][4948] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.8.129/26] handle="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" host="ip-172-31-19-166" Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.809 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:33.896768 containerd[1983]: 2025-01-30 13:53:33.809 [INFO][4948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.129/26] IPv6=[] ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" HandleID="k8s-pod-network.95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.822 [INFO][4925] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"11b5c859-222b-40cc-bebe-26c0a9a42d40", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"calico-apiserver-7455f859bb-tzswx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7233812dacd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.822 [INFO][4925] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.129/32] ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.823 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7233812dacd ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.841 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" 
Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.849 [INFO][4925] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"11b5c859-222b-40cc-bebe-26c0a9a42d40", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0", Pod:"calico-apiserver-7455f859bb-tzswx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7233812dacd", MAC:"96:f0:ea:47:d5:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:33.899709 containerd[1983]: 2025-01-30 13:53:33.879 [INFO][4925] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-tzswx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:34.036358 systemd-networkd[1861]: cali2c8cbbc03f8: Link UP Jan 30 13:53:34.040346 systemd-networkd[1861]: cali2c8cbbc03f8: Gained carrier Jan 30 13:53:34.104731 containerd[1983]: time="2025-01-30T13:53:34.104313027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:34.104731 containerd[1983]: time="2025-01-30T13:53:34.104393570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:34.104731 containerd[1983]: time="2025-01-30T13:53:34.104435331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.104731 containerd[1983]: time="2025-01-30T13:53:34.104549893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.888 [INFO][4987] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.894 [INFO][4987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" iface="eth0" netns="/var/run/netns/cni-5cbce999-47f6-465e-fc3c-576ceaf522a4" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.895 [INFO][4987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" iface="eth0" netns="/var/run/netns/cni-5cbce999-47f6-465e-fc3c-576ceaf522a4" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.895 [INFO][4987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" iface="eth0" netns="/var/run/netns/cni-5cbce999-47f6-465e-fc3c-576ceaf522a4" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.895 [INFO][4987] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.895 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.973 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.973 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:33.973 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:34.031 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:34.031 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:34.085 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:34.123285 containerd[1983]: 2025-01-30 13:53:34.107 [INFO][4987] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:34.123994 containerd[1983]: time="2025-01-30T13:53:34.123693140Z" level=info msg="TearDown network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" successfully" Jan 30 13:53:34.123994 containerd[1983]: time="2025-01-30T13:53:34.123741139Z" level=info msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" returns successfully" Jan 30 13:53:34.127314 containerd[1983]: time="2025-01-30T13:53:34.126559093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4dgqz,Uid:c4e15632-c2e7-4cc7-a34a-0ee80ce9b661,Namespace:kube-system,Attempt:1,}" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.645 [INFO][4928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0 calico-apiserver-7455f859bb- calico-apiserver 228374c3-8542-47d9-a2e1-c564d0ab650c 781 0 2025-01-30 13:53:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7455f859bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-166 calico-apiserver-7455f859bb-hlpjx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c8cbbc03f8 [] []}} ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.645 [INFO][4928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.726 [INFO][4949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" HandleID="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.755 [INFO][4949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" HandleID="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c2c50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-166", "pod":"calico-apiserver-7455f859bb-hlpjx", "timestamp":"2025-01-30 13:53:33.726809806 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.755 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.810 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.810 [INFO][4949] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.820 [INFO][4949] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.833 [INFO][4949] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.853 [INFO][4949] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.867 [INFO][4949] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.885 [INFO][4949] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.885 [INFO][4949] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.894 [INFO][4949] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840 Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.917 [INFO][4949] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.954 [INFO][4949] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.130/26] block=192.168.8.128/26 handle="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.955 [INFO][4949] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.130/26] handle="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" host="ip-172-31-19-166" Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.956 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
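Both IPAM traces follow the same shape: take the host-wide lock, confirm the node's affinity for block 192.168.8.128/26, then claim the next free address (192.168.8.129 for the first endpoint, 192.168.8.130 for the second) and release the lock. A sketch of the underlying address arithmetic using Go's net/netip, with the values taken from the log; this is not Calico's IPAM code:

```go
// block_affinity.go - the address math behind the IPAM trace: node
// ip-172-31-19-166 owns the affine block 192.168.8.128/26 and assigns
// 192.168.8.129/32 and 192.168.8.130/32 out of it.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.8.128/26") // host's affine block, from the log
	for _, s := range []string{"192.168.8.129", "192.168.8.130"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
	fmt.Println("addresses per /26 block:", 1<<(32-block.Bits()))
}
```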
Jan 30 13:53:34.154739 containerd[1983]: 2025-01-30 13:53:33.956 [INFO][4949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.130/26] IPv6=[] ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" HandleID="k8s-pod-network.dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:33.971 [INFO][4928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"228374c3-8542-47d9-a2e1-c564d0ab650c", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"calico-apiserver-7455f859bb-hlpjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8cbbc03f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:33.972 [INFO][4928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.130/32] ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:33.973 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c8cbbc03f8 ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:34.044 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:34.047 [INFO][4928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"228374c3-8542-47d9-a2e1-c564d0ab650c", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840", Pod:"calico-apiserver-7455f859bb-hlpjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8cbbc03f8", MAC:"e2:fa:23:84:a9:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:34.159361 containerd[1983]: 2025-01-30 13:53:34.129 [INFO][4928] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840" Namespace="calico-apiserver" Pod="calico-apiserver-7455f859bb-hlpjx" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:34.243385 systemd[1]: Started cri-containerd-95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0.scope - libcontainer container 95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0. Jan 30 13:53:34.303311 systemd-networkd[1861]: vxlan.calico: Gained IPv6LL Jan 30 13:53:34.364540 containerd[1983]: time="2025-01-30T13:53:34.364153451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:34.364540 containerd[1983]: time="2025-01-30T13:53:34.364231431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:34.364540 containerd[1983]: time="2025-01-30T13:53:34.364250201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.364540 containerd[1983]: time="2025-01-30T13:53:34.364363175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.939 [INFO][4986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.941 [INFO][4986] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" iface="eth0" netns="/var/run/netns/cni-959b3dce-be87-0272-c98e-51a16a3ef3c4" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.944 [INFO][4986] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" iface="eth0" netns="/var/run/netns/cni-959b3dce-be87-0272-c98e-51a16a3ef3c4" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.948 [INFO][4986] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" iface="eth0" netns="/var/run/netns/cni-959b3dce-be87-0272-c98e-51a16a3ef3c4" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.948 [INFO][4986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:33.948 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.333 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.336 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.336 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.379 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.379 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.386 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:34.407448 containerd[1983]: 2025-01-30 13:53:34.396 [INFO][4986] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:34.409423 containerd[1983]: time="2025-01-30T13:53:34.408032814Z" level=info msg="TearDown network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" successfully" Jan 30 13:53:34.409423 containerd[1983]: time="2025-01-30T13:53:34.408238779Z" level=info msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" returns successfully" Jan 30 13:53:34.411706 containerd[1983]: time="2025-01-30T13:53:34.410408170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dlwgg,Uid:092d9e15-ee48-4734-aba0-f5135cecdc7c,Namespace:calico-system,Attempt:1,}" Jan 30 13:53:34.441355 systemd[1]: Started cri-containerd-dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840.scope - libcontainer container dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840. Jan 30 13:53:34.463804 systemd[1]: run-netns-cni\x2d959b3dce\x2dbe87\x2d0272\x2dc98e\x2d51a16a3ef3c4.mount: Deactivated successfully. Jan 30 13:53:34.464590 systemd[1]: run-netns-cni\x2d5cbce999\x2d47f6\x2d465e\x2dfc3c\x2d576ceaf522a4.mount: Deactivated successfully. Jan 30 13:53:34.745067 containerd[1983]: time="2025-01-30T13:53:34.744919727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-tzswx,Uid:11b5c859-222b-40cc-bebe-26c0a9a42d40,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0\"" Jan 30 13:53:34.751367 containerd[1983]: time="2025-01-30T13:53:34.751089204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:53:34.864941 systemd-networkd[1861]: calib1d782861b2: Link UP Jan 30 13:53:34.875411 systemd-networkd[1861]: calib1d782861b2: Gained carrier Jan 30 13:53:34.904926 containerd[1983]: time="2025-01-30T13:53:34.904887392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7455f859bb-hlpjx,Uid:228374c3-8542-47d9-a2e1-c564d0ab650c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840\"" Jan 30 13:53:34.945374 systemd-networkd[1861]: cali7233812dacd: Gained IPv6LL Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.496 [INFO][5063] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0 coredns-7db6d8ff4d- kube-system c4e15632-c2e7-4cc7-a34a-0ee80ce9b661 790 0 2025-01-30 13:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-166 coredns-7db6d8ff4d-4dgqz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1d782861b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.496 [INFO][5063] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 
13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.633 [INFO][5132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" HandleID="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.659 [INFO][5132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" HandleID="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003527c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-166", "pod":"coredns-7db6d8ff4d-4dgqz", "timestamp":"2025-01-30 13:53:34.633844649 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.659 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.659 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.660 [INFO][5132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.664 [INFO][5132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.700 [INFO][5132] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.741 [INFO][5132] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.752 [INFO][5132] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.772 [INFO][5132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.775 [INFO][5132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.780 [INFO][5132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8 Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.790 [INFO][5132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.820 [INFO][5132] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.131/26] block=192.168.8.128/26 
handle="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.821 [INFO][5132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.131/26] handle="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" host="ip-172-31-19-166" Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.821 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:34.954283 containerd[1983]: 2025-01-30 13:53:34.821 [INFO][5132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.131/26] IPv6=[] ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" HandleID="k8s-pod-network.90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.848 [INFO][5063] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"coredns-7db6d8ff4d-4dgqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1d782861b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.848 [INFO][5063] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.131/32] ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.848 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1d782861b2 
ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.866 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.888 [INFO][5063] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8", Pod:"coredns-7db6d8ff4d-4dgqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1d782861b2", MAC:"7a:42:f5:da:72:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:34.960935 containerd[1983]: 2025-01-30 13:53:34.940 [INFO][5063] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4dgqz" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:35.020307 containerd[1983]: time="2025-01-30T13:53:35.018602299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:35.021225 containerd[1983]: time="2025-01-30T13:53:35.020954946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:35.022834 containerd[1983]: time="2025-01-30T13:53:35.022674627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:35.023187 containerd[1983]: time="2025-01-30T13:53:35.023047232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:35.108427 systemd[1]: Started cri-containerd-90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8.scope - libcontainer container 90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8. Jan 30 13:53:35.117357 systemd-networkd[1861]: cali204e6ca9dda: Link UP Jan 30 13:53:35.121402 systemd-networkd[1861]: cali204e6ca9dda: Gained carrier Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.611 [INFO][5112] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0 csi-node-driver- calico-system 092d9e15-ee48-4734-aba0-f5135cecdc7c 792 0 2025-01-30 13:53:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-166 csi-node-driver-dlwgg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali204e6ca9dda [] []}} ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.611 [INFO][5112] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.964 [INFO][5140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" HandleID="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.988 [INFO][5140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" HandleID="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019ba20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-166", "pod":"csi-node-driver-dlwgg", "timestamp":"2025-01-30 13:53:34.964332493 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.989 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.989 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.989 [INFO][5140] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:34.995 [INFO][5140] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.002 [INFO][5140] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.016 [INFO][5140] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.023 [INFO][5140] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.034 [INFO][5140] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.034 [INFO][5140] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.051 [INFO][5140] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1 Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.066 [INFO][5140] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.084 [INFO][5140] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.132/26] block=192.168.8.128/26 handle="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.084 [INFO][5140] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.132/26] handle="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" host="ip-172-31-19-166" Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.084 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
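The same walk has just repeated for csi-node-driver-dlwgg, ending in 192.168.8.132/26. The /26 appearing in every record is Calico's default per-node block affinity: each node claims 64-address blocks out of the cluster pool, so every pod scheduled on this node lands in 192.168.8.128/26. The arithmetic is a plain prefix mask, checked here in stdlib Go using the addresses assigned in the records above:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Mask each assigned pod address down to its /26 block boundary;
	// all of them fall in the block affine to ip-172-31-19-166.
	for _, s := range []string{"192.168.8.130", "192.168.8.131", "192.168.8.132"} {
		ip := netip.MustParseAddr(s)
		block, _ := ip.Prefix(26)           // zero the 6 host bits
		fmt.Printf("%s -> %s\n", ip, block) // ... -> 192.168.8.128/26
	}
}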
Jan 30 13:53:35.166063 containerd[1983]: 2025-01-30 13:53:35.084 [INFO][5140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.132/26] IPv6=[] ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" HandleID="k8s-pod-network.7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.098 [INFO][5112] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092d9e15-ee48-4734-aba0-f5135cecdc7c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"csi-node-driver-dlwgg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204e6ca9dda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.099 [INFO][5112] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.132/32] ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.099 [INFO][5112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali204e6ca9dda ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.120 [INFO][5112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.131 [INFO][5112] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" 
Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092d9e15-ee48-4734-aba0-f5135cecdc7c", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1", Pod:"csi-node-driver-dlwgg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204e6ca9dda", MAC:"06:17:7b:a1:5a:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:35.170948 containerd[1983]: 2025-01-30 13:53:35.159 [INFO][5112] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1" Namespace="calico-system" Pod="csi-node-driver-dlwgg" WorkloadEndpoint="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:35.280462 containerd[1983]: time="2025-01-30T13:53:35.279845019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4dgqz,Uid:c4e15632-c2e7-4cc7-a34a-0ee80ce9b661,Namespace:kube-system,Attempt:1,} returns sandbox id \"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8\"" Jan 30 13:53:35.305444 containerd[1983]: time="2025-01-30T13:53:35.303933010Z" level=info msg="CreateContainer within sandbox \"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:53:35.317825 containerd[1983]: time="2025-01-30T13:53:35.317519245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:35.317825 containerd[1983]: time="2025-01-30T13:53:35.317573183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:35.317825 containerd[1983]: time="2025-01-30T13:53:35.317588130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:35.317825 containerd[1983]: time="2025-01-30T13:53:35.317764213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:35.366460 containerd[1983]: time="2025-01-30T13:53:35.366411360Z" level=info msg="CreateContainer within sandbox \"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc\"" Jan 30 13:53:35.368968 containerd[1983]: time="2025-01-30T13:53:35.367941401Z" level=info msg="StartContainer for \"2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc\"" Jan 30 13:53:35.384319 systemd[1]: Started cri-containerd-7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1.scope - libcontainer container 7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1. Jan 30 13:53:35.452558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553969109.mount: Deactivated successfully. Jan 30 13:53:35.487122 systemd[1]: run-containerd-runc-k8s.io-2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc-runc.wCCLbd.mount: Deactivated successfully. Jan 30 13:53:35.510720 systemd[1]: Started cri-containerd-2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc.scope - libcontainer container 2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc. Jan 30 13:53:35.522686 containerd[1983]: time="2025-01-30T13:53:35.522624636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dlwgg,Uid:092d9e15-ee48-4734-aba0-f5135cecdc7c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1\"" Jan 30 13:53:35.603829 containerd[1983]: time="2025-01-30T13:53:35.603519726Z" level=info msg="StartContainer for \"2339d35cb61ae1212c43f23199abb3cd0ffb2b742d447dd11e2288588c5515cc\" returns successfully" Jan 30 13:53:35.731677 containerd[1983]: time="2025-01-30T13:53:35.731610055Z" level=info msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.796 [INFO][5313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.798 [INFO][5313] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" iface="eth0" netns="/var/run/netns/cni-9e55038d-1c4f-aa62-5e9d-14051f6c4584" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.798 [INFO][5313] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" iface="eth0" netns="/var/run/netns/cni-9e55038d-1c4f-aa62-5e9d-14051f6c4584" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.805 [INFO][5313] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" iface="eth0" netns="/var/run/netns/cni-9e55038d-1c4f-aa62-5e9d-14051f6c4584" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.805 [INFO][5313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.805 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.833 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.834 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.834 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.844 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.844 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.849 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:35.859467 containerd[1983]: 2025-01-30 13:53:35.850 [INFO][5313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:35.861378 containerd[1983]: time="2025-01-30T13:53:35.859619262Z" level=info msg="TearDown network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" successfully" Jan 30 13:53:35.861378 containerd[1983]: time="2025-01-30T13:53:35.859652966Z" level=info msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" returns successfully" Jan 30 13:53:35.864459 containerd[1983]: time="2025-01-30T13:53:35.864417243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k9ts7,Uid:2b094d3d-1ca0-4567-b2d8-ca3df2f86d82,Namespace:kube-system,Attempt:1,}" Jan 30 13:53:35.871541 systemd[1]: run-netns-cni\x2d9e55038d\x2d1c4f\x2daa62\x2d5e9d\x2d14051f6c4584.mount: Deactivated successfully. 
Jan 30 13:53:35.904331 systemd-networkd[1861]: cali2c8cbbc03f8: Gained IPv6LL Jan 30 13:53:36.137477 systemd-networkd[1861]: cali6453e004578: Link UP Jan 30 13:53:36.137898 systemd-networkd[1861]: cali6453e004578: Gained carrier Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:35.986 [INFO][5328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0 coredns-7db6d8ff4d- kube-system 2b094d3d-1ca0-4567-b2d8-ca3df2f86d82 813 0 2025-01-30 13:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-166 coredns-7db6d8ff4d-k9ts7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6453e004578 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:35.986 [INFO][5328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.066 [INFO][5339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" HandleID="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.084 [INFO][5339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" HandleID="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000221010), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-166", "pod":"coredns-7db6d8ff4d-k9ts7", "timestamp":"2025-01-30 13:53:36.066818745 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.084 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.084 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.086 [INFO][5339] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.089 [INFO][5339] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.094 [INFO][5339] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.099 [INFO][5339] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.102 [INFO][5339] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.106 [INFO][5339] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.107 [INFO][5339] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.109 [INFO][5339] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838 Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.117 [INFO][5339] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.127 [INFO][5339] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.133/26] block=192.168.8.128/26 handle="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.127 [INFO][5339] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.133/26] handle="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" host="ip-172-31-19-166" Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.127 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
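The v3.WorkloadEndpoint dumps for the two coredns pods (above and just below) print port numbers in Go's hex formatting. Decoding Port:0x35 and Port:0x23c1 recovers the expected values:

package main

import "fmt"

func main() {
	// Hex values copied from the WorkloadEndpointPort dumps.
	ports := []struct {
		name string
		val  uint16
	}{
		{"dns (UDP)", 0x35},       // 53
		{"dns-tcp (TCP)", 0x35},   // 53
		{"metrics (TCP)", 0x23c1}, // 9153, coredns' metrics port
	}
	for _, p := range ports {
		fmt.Printf("%s -> %d\n", p.name, p.val)
	}
}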
Jan 30 13:53:36.165018 containerd[1983]: 2025-01-30 13:53:36.127 [INFO][5339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.133/26] IPv6=[] ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" HandleID="k8s-pod-network.b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.130 [INFO][5328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"coredns-7db6d8ff4d-k9ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6453e004578", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.131 [INFO][5328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.133/32] ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.131 [INFO][5328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6453e004578 ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.138 [INFO][5328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" 
WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.138 [INFO][5328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838", Pod:"coredns-7db6d8ff4d-k9ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6453e004578", MAC:"76:9d:09:8c:30:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:36.168063 containerd[1983]: 2025-01-30 13:53:36.157 [INFO][5328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k9ts7" WorkloadEndpoint="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:36.217323 containerd[1983]: time="2025-01-30T13:53:36.217125460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:36.218933 containerd[1983]: time="2025-01-30T13:53:36.218545848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:36.218933 containerd[1983]: time="2025-01-30T13:53:36.218575548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:36.218933 containerd[1983]: time="2025-01-30T13:53:36.218681211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:36.251432 systemd[1]: Started cri-containerd-b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838.scope - libcontainer container b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838. Jan 30 13:53:36.329456 containerd[1983]: time="2025-01-30T13:53:36.329411984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k9ts7,Uid:2b094d3d-1ca0-4567-b2d8-ca3df2f86d82,Namespace:kube-system,Attempt:1,} returns sandbox id \"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838\"" Jan 30 13:53:36.337788 containerd[1983]: time="2025-01-30T13:53:36.337643981Z" level=info msg="CreateContainer within sandbox \"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:53:36.352521 systemd-networkd[1861]: calib1d782861b2: Gained IPv6LL Jan 30 13:53:36.386643 containerd[1983]: time="2025-01-30T13:53:36.386596602Z" level=info msg="CreateContainer within sandbox \"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb\"" Jan 30 13:53:36.390195 containerd[1983]: time="2025-01-30T13:53:36.389874702Z" level=info msg="StartContainer for \"9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb\"" Jan 30 13:53:36.542219 systemd[1]: run-containerd-runc-k8s.io-9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb-runc.GtUVPZ.mount: Deactivated successfully. Jan 30 13:53:36.557470 systemd[1]: Started cri-containerd-9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb.scope - libcontainer container 9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb. Jan 30 13:53:36.654198 kubelet[3510]: I0130 13:53:36.652899 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4dgqz" podStartSLOduration=36.65287324 podStartE2EDuration="36.65287324s" podCreationTimestamp="2025-01-30 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:36.643798934 +0000 UTC m=+50.180564271" watchObservedRunningTime="2025-01-30 13:53:36.65287324 +0000 UTC m=+50.189638578" Jan 30 13:53:36.681631 containerd[1983]: time="2025-01-30T13:53:36.681578622Z" level=info msg="StartContainer for \"9e6d372d29e3c3b95cf1d6fd87cbb71eaaac51711238685edabe595fb098f4cb\" returns successfully" Jan 30 13:53:36.797121 containerd[1983]: time="2025-01-30T13:53:36.793829179Z" level=info msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" Jan 30 13:53:37.119359 systemd-networkd[1861]: cali204e6ca9dda: Gained IPv6LL Jan 30 13:53:37.247913 systemd[1]: Started sshd@9-172.31.19.166:22-139.178.68.195:44898.service - OpenSSH per-connection server daemon (139.178.68.195:44898). Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.103 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.104 [INFO][5451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" iface="eth0" netns="/var/run/netns/cni-bc723f9b-fb41-6f57-df2e-7311579783f2" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.104 [INFO][5451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" iface="eth0" netns="/var/run/netns/cni-bc723f9b-fb41-6f57-df2e-7311579783f2" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.104 [INFO][5451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" iface="eth0" netns="/var/run/netns/cni-bc723f9b-fb41-6f57-df2e-7311579783f2" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.104 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.106 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.252 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.257 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.257 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.287 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.289 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.293 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:37.359784 containerd[1983]: 2025-01-30 13:53:37.337 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:37.370390 containerd[1983]: time="2025-01-30T13:53:37.368783988Z" level=info msg="TearDown network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" successfully" Jan 30 13:53:37.370390 containerd[1983]: time="2025-01-30T13:53:37.368830363Z" level=info msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" returns successfully" Jan 30 13:53:37.369467 systemd[1]: run-netns-cni\x2dbc723f9b\x2dfb41\x2d6f57\x2ddf2e\x2d7311579783f2.mount: Deactivated successfully. Jan 30 13:53:37.383603 containerd[1983]: time="2025-01-30T13:53:37.383087321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c996ffb5d-7txfd,Uid:99b6f917-c78e-4eb5-a202-0c6311880c4e,Namespace:calico-system,Attempt:1,}" Jan 30 13:53:37.568829 systemd-networkd[1861]: cali6453e004578: Gained IPv6LL Jan 30 13:53:37.603134 sshd[5472]: Accepted publickey for core from 139.178.68.195 port 44898 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:37.608440 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:37.617060 systemd-logind[1947]: New session 10 of user core. Jan 30 13:53:37.625885 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:53:37.723257 kubelet[3510]: I0130 13:53:37.721272 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k9ts7" podStartSLOduration=37.721247428 podStartE2EDuration="37.721247428s" podCreationTimestamp="2025-01-30 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:37.71935402 +0000 UTC m=+51.256119356" watchObservedRunningTime="2025-01-30 13:53:37.721247428 +0000 UTC m=+51.258012767" Jan 30 13:53:37.956446 systemd-networkd[1861]: calib26ef9f50b7: Link UP Jan 30 13:53:37.962212 systemd-networkd[1861]: calib26ef9f50b7: Gained carrier Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.607 [INFO][5481] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0 calico-kube-controllers-6c996ffb5d- calico-system 99b6f917-c78e-4eb5-a202-0c6311880c4e 857 0 2025-01-30 13:53:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c996ffb5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-166 calico-kube-controllers-6c996ffb5d-7txfd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib26ef9f50b7 [] []}} ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.608 [INFO][5481] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" 
Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.741 [INFO][5494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" HandleID="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.804 [INFO][5494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" HandleID="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-166", "pod":"calico-kube-controllers-6c996ffb5d-7txfd", "timestamp":"2025-01-30 13:53:37.738737716 +0000 UTC"}, Hostname:"ip-172-31-19-166", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.805 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.805 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.805 [INFO][5494] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-166' Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.819 [INFO][5494] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.839 [INFO][5494] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.861 [INFO][5494] ipam/ipam.go 489: Trying affinity for 192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.868 [INFO][5494] ipam/ipam.go 155: Attempting to load block cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.876 [INFO][5494] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.128/26 host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.876 [INFO][5494] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.128/26 handle="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.881 [INFO][5494] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946 Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.901 [INFO][5494] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.8.128/26 handle="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.924 [INFO][5494] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.8.134/26] block=192.168.8.128/26 
handle="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.924 [INFO][5494] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.134/26] handle="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" host="ip-172-31-19-166" Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.924 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:38.009349 containerd[1983]: 2025-01-30 13:53:37.924 [INFO][5494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.8.134/26] IPv6=[] ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" HandleID="k8s-pod-network.9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.940 [INFO][5481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0", GenerateName:"calico-kube-controllers-6c996ffb5d-", Namespace:"calico-system", SelfLink:"", UID:"99b6f917-c78e-4eb5-a202-0c6311880c4e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c996ffb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"", Pod:"calico-kube-controllers-6c996ffb5d-7txfd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib26ef9f50b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.940 [INFO][5481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.8.134/32] ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.940 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib26ef9f50b7 ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" 
WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.969 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.972 [INFO][5481] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0", GenerateName:"calico-kube-controllers-6c996ffb5d-", Namespace:"calico-system", SelfLink:"", UID:"99b6f917-c78e-4eb5-a202-0c6311880c4e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c996ffb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946", Pod:"calico-kube-controllers-6c996ffb5d-7txfd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib26ef9f50b7", MAC:"46:a1:ac:87:09:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:38.011661 containerd[1983]: 2025-01-30 13:53:37.999 [INFO][5481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946" Namespace="calico-system" Pod="calico-kube-controllers-6c996ffb5d-7txfd" WorkloadEndpoint="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:38.127591 containerd[1983]: time="2025-01-30T13:53:38.125988444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:38.127591 containerd[1983]: time="2025-01-30T13:53:38.126284688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:38.127591 containerd[1983]: time="2025-01-30T13:53:38.126322096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:38.127591 containerd[1983]: time="2025-01-30T13:53:38.126459013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:38.213719 systemd[1]: Started cri-containerd-9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946.scope - libcontainer container 9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946. Jan 30 13:53:38.465790 containerd[1983]: time="2025-01-30T13:53:38.465733434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c996ffb5d-7txfd,Uid:99b6f917-c78e-4eb5-a202-0c6311880c4e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946\"" Jan 30 13:53:38.623201 sshd[5472]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:38.638793 systemd[1]: sshd@9-172.31.19.166:22-139.178.68.195:44898.service: Deactivated successfully. Jan 30 13:53:38.644603 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:53:38.650389 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:53:38.656434 systemd-logind[1947]: Removed session 10. Jan 30 13:53:39.487434 systemd-networkd[1861]: calib26ef9f50b7: Gained IPv6LL Jan 30 13:53:39.591862 containerd[1983]: time="2025-01-30T13:53:39.590577161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:39.596239 containerd[1983]: time="2025-01-30T13:53:39.596175733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:53:39.597624 containerd[1983]: time="2025-01-30T13:53:39.597594818Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:39.603298 containerd[1983]: time="2025-01-30T13:53:39.603003805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:39.605990 containerd[1983]: time="2025-01-30T13:53:39.605399992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.849877167s" Jan 30 13:53:39.605990 containerd[1983]: time="2025-01-30T13:53:39.605436045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:53:39.616466 containerd[1983]: time="2025-01-30T13:53:39.615600689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:53:39.621569 containerd[1983]: time="2025-01-30T13:53:39.621472595Z" level=info msg="CreateContainer within sandbox \"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:53:39.656797 containerd[1983]: time="2025-01-30T13:53:39.655786035Z" level=info msg="CreateContainer within 
sandbox \"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7d001d7fcb972565ad8f6c91799672e69655746f6b29f67cfe8a38352823668c\"" Jan 30 13:53:39.658760 containerd[1983]: time="2025-01-30T13:53:39.658722493Z" level=info msg="StartContainer for \"7d001d7fcb972565ad8f6c91799672e69655746f6b29f67cfe8a38352823668c\"" Jan 30 13:53:39.735802 systemd[1]: Started cri-containerd-7d001d7fcb972565ad8f6c91799672e69655746f6b29f67cfe8a38352823668c.scope - libcontainer container 7d001d7fcb972565ad8f6c91799672e69655746f6b29f67cfe8a38352823668c. Jan 30 13:53:39.839607 containerd[1983]: time="2025-01-30T13:53:39.839273953Z" level=info msg="StartContainer for \"7d001d7fcb972565ad8f6c91799672e69655746f6b29f67cfe8a38352823668c\" returns successfully" Jan 30 13:53:40.011849 containerd[1983]: time="2025-01-30T13:53:40.011370481Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:40.016192 containerd[1983]: time="2025-01-30T13:53:40.014754292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:53:40.019906 containerd[1983]: time="2025-01-30T13:53:40.019843925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 404.195201ms" Jan 30 13:53:40.020132 containerd[1983]: time="2025-01-30T13:53:40.020072777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:53:40.022606 containerd[1983]: time="2025-01-30T13:53:40.022346856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:53:40.027351 containerd[1983]: time="2025-01-30T13:53:40.027306048Z" level=info msg="CreateContainer within sandbox \"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:53:40.058156 containerd[1983]: time="2025-01-30T13:53:40.057627022Z" level=info msg="CreateContainer within sandbox \"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f10021a54fb3483e6cfc3fe9810097a74cbfae819350a76e62da557f24c0d55\"" Jan 30 13:53:40.059414 containerd[1983]: time="2025-01-30T13:53:40.059162233Z" level=info msg="StartContainer for \"7f10021a54fb3483e6cfc3fe9810097a74cbfae819350a76e62da557f24c0d55\"" Jan 30 13:53:40.122367 systemd[1]: Started cri-containerd-7f10021a54fb3483e6cfc3fe9810097a74cbfae819350a76e62da557f24c0d55.scope - libcontainer container 7f10021a54fb3483e6cfc3fe9810097a74cbfae819350a76e62da557f24c0d55. 
Jan 30 13:53:40.316784 containerd[1983]: time="2025-01-30T13:53:40.316212465Z" level=info msg="StartContainer for \"7f10021a54fb3483e6cfc3fe9810097a74cbfae819350a76e62da557f24c0d55\" returns successfully" Jan 30 13:53:40.709265 kubelet[3510]: I0130 13:53:40.705137 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7455f859bb-tzswx" podStartSLOduration=27.840539557 podStartE2EDuration="32.705091386s" podCreationTimestamp="2025-01-30 13:53:08 +0000 UTC" firstStartedPulling="2025-01-30 13:53:34.749995798 +0000 UTC m=+48.286761127" lastFinishedPulling="2025-01-30 13:53:39.614547621 +0000 UTC m=+53.151312956" observedRunningTime="2025-01-30 13:53:40.701527682 +0000 UTC m=+54.238293018" watchObservedRunningTime="2025-01-30 13:53:40.705091386 +0000 UTC m=+54.241856720" Jan 30 13:53:41.627126 containerd[1983]: time="2025-01-30T13:53:41.626426372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:41.628361 containerd[1983]: time="2025-01-30T13:53:41.628237537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:53:41.633124 containerd[1983]: time="2025-01-30T13:53:41.631072598Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:41.635164 containerd[1983]: time="2025-01-30T13:53:41.635116867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:41.636523 containerd[1983]: time="2025-01-30T13:53:41.636487040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.614099295s" Jan 30 13:53:41.636687 containerd[1983]: time="2025-01-30T13:53:41.636668706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:53:41.650182 containerd[1983]: time="2025-01-30T13:53:41.650076577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:53:41.658744 containerd[1983]: time="2025-01-30T13:53:41.658698062Z" level=info msg="CreateContainer within sandbox \"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:53:41.691682 kubelet[3510]: I0130 13:53:41.691638 3510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:53:41.707950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804811943.mount: Deactivated successfully. 
Jan 30 13:53:41.715543 containerd[1983]: time="2025-01-30T13:53:41.715459298Z" level=info msg="CreateContainer within sandbox \"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89750085598a2efb9a70a19c41a6be41e361a22a9ee6207a7894bdebafb4e454\"" Jan 30 13:53:41.734126 containerd[1983]: time="2025-01-30T13:53:41.731296105Z" level=info msg="StartContainer for \"89750085598a2efb9a70a19c41a6be41e361a22a9ee6207a7894bdebafb4e454\"" Jan 30 13:53:41.805229 ntpd[1940]: Listen normally on 8 vxlan.calico 192.168.8.128:123 Jan 30 13:53:41.805416 ntpd[1940]: Listen normally on 9 vxlan.calico [fe80::64ed:50ff:fe2e:9d88%4]:123 Jan 30 13:53:41.805483 ntpd[1940]: Listen normally on 10 cali7233812dacd [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 13:53:41.805525 ntpd[1940]: Listen normally on 11 cali2c8cbbc03f8 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 13:53:41.805563 ntpd[1940]: Listen normally on 12 calib1d782861b2 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 13:53:41.805603 ntpd[1940]: Listen normally on 13 cali204e6ca9dda [fe80::ecee:eeff:feee:eeee%10]:123 Jan 30 13:53:41.805640 ntpd[1940]: Listen normally on 14 cali6453e004578 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 13:53:41.805677 ntpd[1940]: Listen normally on 15 calib26ef9f50b7 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 13:53:41.843035 systemd[1]: Started cri-containerd-89750085598a2efb9a70a19c41a6be41e361a22a9ee6207a7894bdebafb4e454.scope - libcontainer container 89750085598a2efb9a70a19c41a6be41e361a22a9ee6207a7894bdebafb4e454.
Jan 30 13:53:41.965226 containerd[1983]: time="2025-01-30T13:53:41.964802407Z" level=info msg="StartContainer for \"89750085598a2efb9a70a19c41a6be41e361a22a9ee6207a7894bdebafb4e454\" returns successfully" Jan 30 13:53:43.325337 kubelet[3510]: I0130 13:53:43.325198 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7455f859bb-hlpjx" podStartSLOduration=30.251660642 podStartE2EDuration="35.325174554s" podCreationTimestamp="2025-01-30 13:53:08 +0000 UTC" firstStartedPulling="2025-01-30 13:53:34.948235031 +0000 UTC m=+48.485000347" lastFinishedPulling="2025-01-30 13:53:40.021748928 +0000 UTC m=+53.558514259" observedRunningTime="2025-01-30 13:53:40.753023573 +0000 UTC m=+54.289788909" watchObservedRunningTime="2025-01-30 13:53:43.325174554 +0000 UTC m=+56.861939892" Jan 30 13:53:43.680477 systemd[1]: Started sshd@10-172.31.19.166:22-139.178.68.195:44906.service - OpenSSH per-connection server daemon (139.178.68.195:44906). Jan 30 13:53:43.975891 sshd[5716]: Accepted publickey for core from 139.178.68.195 port 44906 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:43.982039 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:43.993965 systemd-logind[1947]: New session 11 of user core. Jan 30 13:53:43.999635 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:53:44.857629 sshd[5716]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:44.868329 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:53:44.869028 systemd[1]: sshd@10-172.31.19.166:22-139.178.68.195:44906.service: Deactivated successfully. Jan 30 13:53:44.873810 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:53:44.877372 systemd-logind[1947]: Removed session 11. 
Jan 30 13:53:44.989510 containerd[1983]: time="2025-01-30T13:53:44.989429948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:44.991807 containerd[1983]: time="2025-01-30T13:53:44.991567468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:53:44.995902 containerd[1983]: time="2025-01-30T13:53:44.994331365Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:44.999702 containerd[1983]: time="2025-01-30T13:53:44.998578983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:44.999702 containerd[1983]: time="2025-01-30T13:53:44.999484656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.349116348s" Jan 30 13:53:44.999702 containerd[1983]: time="2025-01-30T13:53:44.999523692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:53:45.001655 containerd[1983]: time="2025-01-30T13:53:45.001583157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:53:45.096851 containerd[1983]: time="2025-01-30T13:53:45.095947193Z" level=info msg="CreateContainer within sandbox \"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:53:45.139852 containerd[1983]: time="2025-01-30T13:53:45.139724952Z" level=info msg="CreateContainer within sandbox \"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725\"" Jan 30 13:53:45.145273 containerd[1983]: time="2025-01-30T13:53:45.144188889Z" level=info msg="StartContainer for \"8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725\"" Jan 30 13:53:45.198318 systemd[1]: Started cri-containerd-8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725.scope - libcontainer container 8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725. 
Jan 30 13:53:45.268002 containerd[1983]: time="2025-01-30T13:53:45.267872332Z" level=info msg="StartContainer for \"8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725\" returns successfully" Jan 30 13:53:45.782737 kubelet[3510]: I0130 13:53:45.782679 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c996ffb5d-7txfd" podStartSLOduration=30.253356085 podStartE2EDuration="36.782658202s" podCreationTimestamp="2025-01-30 13:53:09 +0000 UTC" firstStartedPulling="2025-01-30 13:53:38.471813175 +0000 UTC m=+52.008578497" lastFinishedPulling="2025-01-30 13:53:45.001115283 +0000 UTC m=+58.537880614" observedRunningTime="2025-01-30 13:53:45.772275093 +0000 UTC m=+59.309040431" watchObservedRunningTime="2025-01-30 13:53:45.782658202 +0000 UTC m=+59.319423538" Jan 30 13:53:46.780334 containerd[1983]: time="2025-01-30T13:53:46.780281110Z" level=info msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.050 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"228374c3-8542-47d9-a2e1-c564d0ab650c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840", Pod:"calico-apiserver-7455f859bb-hlpjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8cbbc03f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.054 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.056 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" iface="eth0" netns="" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.056 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.057 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.138 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.142 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.143 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.165 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.165 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.173 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:47.194180 containerd[1983]: 2025-01-30 13:53:47.185 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.194180 containerd[1983]: time="2025-01-30T13:53:47.193952502Z" level=info msg="TearDown network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" successfully" Jan 30 13:53:47.194180 containerd[1983]: time="2025-01-30T13:53:47.193983711Z" level=info msg="StopPodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" returns successfully" Jan 30 13:53:47.349208 containerd[1983]: time="2025-01-30T13:53:47.348784389Z" level=info msg="RemovePodSandbox for \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" Jan 30 13:53:47.349208 containerd[1983]: time="2025-01-30T13:53:47.348850046Z" level=info msg="Forcibly stopping sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\"" Jan 30 13:53:47.713775 containerd[1983]: time="2025-01-30T13:53:47.713724261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:47.724653 containerd[1983]: time="2025-01-30T13:53:47.723376472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:53:47.731144 containerd[1983]: time="2025-01-30T13:53:47.730652600Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.606 [WARNING][5851] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"228374c3-8542-47d9-a2e1-c564d0ab650c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"dce6c4e5a028526767858f173547f6d484ab135f23e609446427f33e2df05840", Pod:"calico-apiserver-7455f859bb-hlpjx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c8cbbc03f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.607 [INFO][5851] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.607 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" iface="eth0" netns="" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.607 [INFO][5851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.607 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.702 [INFO][5858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.703 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.703 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.735 [WARNING][5858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.735 [INFO][5858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" HandleID="k8s-pod-network.c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--hlpjx-eth0" Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.742 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:47.756046 containerd[1983]: 2025-01-30 13:53:47.750 [INFO][5851] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e" Jan 30 13:53:47.756046 containerd[1983]: time="2025-01-30T13:53:47.755280455Z" level=info msg="TearDown network for sandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" successfully" Jan 30 13:53:47.781895 containerd[1983]: time="2025-01-30T13:53:47.781512099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:47.783170 containerd[1983]: time="2025-01-30T13:53:47.783017551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.781387089s" Jan 30 13:53:47.783170 containerd[1983]: time="2025-01-30T13:53:47.783061092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:53:47.796129 containerd[1983]: time="2025-01-30T13:53:47.794141888Z" level=info msg="CreateContainer within sandbox \"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:53:47.820559 containerd[1983]: time="2025-01-30T13:53:47.820498856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:47.871083 containerd[1983]: time="2025-01-30T13:53:47.869154951Z" level=info msg="RemovePodSandbox \"c3896f163db0a00ffe6c18dc351f404f7a225a292dbd069dfd06c41bf299a11e\" returns successfully" Jan 30 13:53:47.875044 containerd[1983]: time="2025-01-30T13:53:47.875014989Z" level=info msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" Jan 30 13:53:47.880508 containerd[1983]: time="2025-01-30T13:53:47.880465690Z" level=info msg="CreateContainer within sandbox \"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c95551e2fda8935a6022a06335719c1604ce49c5ad6a96eb4bacb90876976fa7\"" Jan 30 13:53:47.882450 containerd[1983]: time="2025-01-30T13:53:47.882409645Z" level=info msg="StartContainer for \"c95551e2fda8935a6022a06335719c1604ce49c5ad6a96eb4bacb90876976fa7\"" Jan 30 13:53:47.954565 systemd[1]: Started cri-containerd-c95551e2fda8935a6022a06335719c1604ce49c5ad6a96eb4bacb90876976fa7.scope - libcontainer container c95551e2fda8935a6022a06335719c1604ce49c5ad6a96eb4bacb90876976fa7. Jan 30 13:53:48.070524 containerd[1983]: time="2025-01-30T13:53:48.070216834Z" level=info msg="StartContainer for \"c95551e2fda8935a6022a06335719c1604ce49c5ad6a96eb4bacb90876976fa7\" returns successfully" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.040 [WARNING][5885] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838", Pod:"coredns-7db6d8ff4d-k9ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6453e004578", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.040 [INFO][5885] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.040 [INFO][5885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" iface="eth0" netns="" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.040 [INFO][5885] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.041 [INFO][5885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.120 [INFO][5907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.120 [INFO][5907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.120 [INFO][5907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.128 [WARNING][5907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.128 [INFO][5907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.131 [INFO][5907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.135929 containerd[1983]: 2025-01-30 13:53:48.133 [INFO][5885] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.136961 containerd[1983]: time="2025-01-30T13:53:48.136744527Z" level=info msg="TearDown network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" successfully" Jan 30 13:53:48.136961 containerd[1983]: time="2025-01-30T13:53:48.136775155Z" level=info msg="StopPodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" returns successfully" Jan 30 13:53:48.137787 containerd[1983]: time="2025-01-30T13:53:48.137758275Z" level=info msg="RemovePodSandbox for \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" Jan 30 13:53:48.137863 containerd[1983]: time="2025-01-30T13:53:48.137797488Z" level=info msg="Forcibly stopping sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\"" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.187 [WARNING][5937] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b094d3d-1ca0-4567-b2d8-ca3df2f86d82", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"b23c6028957b7658cff0f9fa2619b5b4b5be3978ff8ed601fd86e0a9405c3838", Pod:"coredns-7db6d8ff4d-k9ts7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6453e004578", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.188 [INFO][5937] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.188 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" iface="eth0" netns="" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.188 [INFO][5937] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.188 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.218 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.218 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.218 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.225 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.225 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" HandleID="k8s-pod-network.2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--k9ts7-eth0" Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.227 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.238583 containerd[1983]: 2025-01-30 13:53:48.234 [INFO][5937] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd" Jan 30 13:53:48.238583 containerd[1983]: time="2025-01-30T13:53:48.237308248Z" level=info msg="TearDown network for sandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" successfully" Jan 30 13:53:48.244151 containerd[1983]: time="2025-01-30T13:53:48.244066457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:48.244558 containerd[1983]: time="2025-01-30T13:53:48.244177080Z" level=info msg="RemovePodSandbox \"2357f22516048c6c7ee3f3740cba728b6feeb8e90778cacab2013013fc10bbcd\" returns successfully" Jan 30 13:53:48.245202 containerd[1983]: time="2025-01-30T13:53:48.244754437Z" level=info msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.299 [WARNING][5961] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"11b5c859-222b-40cc-bebe-26c0a9a42d40", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0", Pod:"calico-apiserver-7455f859bb-tzswx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7233812dacd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.299 [INFO][5961] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.299 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" iface="eth0" netns="" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.299 [INFO][5961] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.299 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.325 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.326 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.326 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.332 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.332 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.334 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.337830 containerd[1983]: 2025-01-30 13:53:48.336 [INFO][5961] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.337830 containerd[1983]: time="2025-01-30T13:53:48.337802585Z" level=info msg="TearDown network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" successfully" Jan 30 13:53:48.341621 containerd[1983]: time="2025-01-30T13:53:48.337833827Z" level=info msg="StopPodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" returns successfully" Jan 30 13:53:48.341621 containerd[1983]: time="2025-01-30T13:53:48.338768633Z" level=info msg="RemovePodSandbox for \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" Jan 30 13:53:48.341621 containerd[1983]: time="2025-01-30T13:53:48.339203031Z" level=info msg="Forcibly stopping sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\"" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.394 [WARNING][5986] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0", GenerateName:"calico-apiserver-7455f859bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"11b5c859-222b-40cc-bebe-26c0a9a42d40", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7455f859bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"95d30a3e9c541ec0f5aa7d1ce1040e50c9c5003000636cce3fa1996ce91c83c0", Pod:"calico-apiserver-7455f859bb-tzswx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7233812dacd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.394 [INFO][5986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.394 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" iface="eth0" netns="" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.394 [INFO][5986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.394 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.422 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.422 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.422 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.436 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.436 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" HandleID="k8s-pod-network.0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Workload="ip--172--31--19--166-k8s-calico--apiserver--7455f859bb--tzswx-eth0" Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.438 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.443392 containerd[1983]: 2025-01-30 13:53:48.441 [INFO][5986] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534" Jan 30 13:53:48.444130 containerd[1983]: time="2025-01-30T13:53:48.443434600Z" level=info msg="TearDown network for sandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" successfully" Jan 30 13:53:48.456180 containerd[1983]: time="2025-01-30T13:53:48.456125131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:48.457163 containerd[1983]: time="2025-01-30T13:53:48.456213438Z" level=info msg="RemovePodSandbox \"0d5a34594156568249ae9c16b8a5e94d1f908e4cbe599063d95746b3841ca534\" returns successfully" Jan 30 13:53:48.457163 containerd[1983]: time="2025-01-30T13:53:48.456723970Z" level=info msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.523 [WARNING][6013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092d9e15-ee48-4734-aba0-f5135cecdc7c", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1", Pod:"csi-node-driver-dlwgg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204e6ca9dda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.524 [INFO][6013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.524 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" iface="eth0" netns="" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.524 [INFO][6013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.524 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.566 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.567 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.567 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.575 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.575 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.585 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.592713 containerd[1983]: 2025-01-30 13:53:48.588 [INFO][6013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.592713 containerd[1983]: time="2025-01-30T13:53:48.591910313Z" level=info msg="TearDown network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" successfully" Jan 30 13:53:48.592713 containerd[1983]: time="2025-01-30T13:53:48.591941015Z" level=info msg="StopPodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" returns successfully" Jan 30 13:53:48.594972 containerd[1983]: time="2025-01-30T13:53:48.592966945Z" level=info msg="RemovePodSandbox for \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" Jan 30 13:53:48.594972 containerd[1983]: time="2025-01-30T13:53:48.593004208Z" level=info msg="Forcibly stopping sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\"" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.641 [WARNING][6037] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092d9e15-ee48-4734-aba0-f5135cecdc7c", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"7f9e9651faabf4897a4202788ccdc983ffb21b22a82b50c64a8263a8c4e558f1", Pod:"csi-node-driver-dlwgg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.8.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204e6ca9dda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.641 [INFO][6037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.641 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" iface="eth0" netns="" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.641 [INFO][6037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.641 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.672 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.672 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.672 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.679 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.679 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" HandleID="k8s-pod-network.a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Workload="ip--172--31--19--166-k8s-csi--node--driver--dlwgg-eth0" Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.680 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.686766 containerd[1983]: 2025-01-30 13:53:48.682 [INFO][6037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb" Jan 30 13:53:48.686766 containerd[1983]: time="2025-01-30T13:53:48.685605733Z" level=info msg="TearDown network for sandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" successfully" Jan 30 13:53:48.697966 containerd[1983]: time="2025-01-30T13:53:48.697905544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:48.698124 containerd[1983]: time="2025-01-30T13:53:48.697982495Z" level=info msg="RemovePodSandbox \"a152127c8c8513c4472499ceda39be4a0bb4173d50df2cbf2154da16ef482cfb\" returns successfully" Jan 30 13:53:48.698691 containerd[1983]: time="2025-01-30T13:53:48.698661696Z" level=info msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.791 [WARNING][6061] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8", Pod:"coredns-7db6d8ff4d-4dgqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1d782861b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.791 [INFO][6061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.792 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" iface="eth0" netns="" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.792 [INFO][6061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.792 [INFO][6061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.851 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.851 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.851 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.860 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.860 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.862 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.866922 containerd[1983]: 2025-01-30 13:53:48.864 [INFO][6061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.866922 containerd[1983]: time="2025-01-30T13:53:48.866821941Z" level=info msg="TearDown network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" successfully" Jan 30 13:53:48.866922 containerd[1983]: time="2025-01-30T13:53:48.866850617Z" level=info msg="StopPodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" returns successfully" Jan 30 13:53:48.870407 containerd[1983]: time="2025-01-30T13:53:48.868469047Z" level=info msg="RemovePodSandbox for \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" Jan 30 13:53:48.870407 containerd[1983]: time="2025-01-30T13:53:48.868576934Z" level=info msg="Forcibly stopping sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\"" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.934 [WARNING][6087] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c4e15632-c2e7-4cc7-a34a-0ee80ce9b661", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"90af5c61cfd2fe7a4014680478124313e41c66345c8a2aa2b7ea181012bc57c8", Pod:"coredns-7db6d8ff4d-4dgqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1d782861b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.934 [INFO][6087] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.935 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" iface="eth0" netns="" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.935 [INFO][6087] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.935 [INFO][6087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.977 [INFO][6094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.977 [INFO][6094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.977 [INFO][6094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.986 [WARNING][6094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.987 [INFO][6094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" HandleID="k8s-pod-network.31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Workload="ip--172--31--19--166-k8s-coredns--7db6d8ff4d--4dgqz-eth0" Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.988 [INFO][6094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:48.994325 containerd[1983]: 2025-01-30 13:53:48.990 [INFO][6087] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06" Jan 30 13:53:48.995051 containerd[1983]: time="2025-01-30T13:53:48.994398896Z" level=info msg="TearDown network for sandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" successfully" Jan 30 13:53:49.001007 containerd[1983]: time="2025-01-30T13:53:49.000965083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:49.001141 containerd[1983]: time="2025-01-30T13:53:49.001045632Z" level=info msg="RemovePodSandbox \"31d2221e30d0200aa5f6080b4a8f6a58033d610f83811b0271143583209bbd06\" returns successfully" Jan 30 13:53:49.001668 containerd[1983]: time="2025-01-30T13:53:49.001637996Z" level=info msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.062 [WARNING][6112] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0", GenerateName:"calico-kube-controllers-6c996ffb5d-", Namespace:"calico-system", SelfLink:"", UID:"99b6f917-c78e-4eb5-a202-0c6311880c4e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c996ffb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946", Pod:"calico-kube-controllers-6c996ffb5d-7txfd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib26ef9f50b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.062 [INFO][6112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.062 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" iface="eth0" netns="" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.062 [INFO][6112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.062 [INFO][6112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.105 [INFO][6119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.106 [INFO][6119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.106 [INFO][6119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.117 [WARNING][6119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.117 [INFO][6119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.119 [INFO][6119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:49.125445 containerd[1983]: 2025-01-30 13:53:49.122 [INFO][6112] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.125445 containerd[1983]: time="2025-01-30T13:53:49.125284909Z" level=info msg="TearDown network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" successfully" Jan 30 13:53:49.125445 containerd[1983]: time="2025-01-30T13:53:49.125431021Z" level=info msg="StopPodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" returns successfully" Jan 30 13:53:49.128533 containerd[1983]: time="2025-01-30T13:53:49.128497149Z" level=info msg="RemovePodSandbox for \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" Jan 30 13:53:49.128615 containerd[1983]: time="2025-01-30T13:53:49.128537044Z" level=info msg="Forcibly stopping sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\"" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.205 [WARNING][6137] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0", GenerateName:"calico-kube-controllers-6c996ffb5d-", Namespace:"calico-system", SelfLink:"", UID:"99b6f917-c78e-4eb5-a202-0c6311880c4e", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c996ffb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-166", ContainerID:"9dc7f975f64fc3666c8dccf7c41aa76d6183026285ad7f86aaf128ad6945e946", Pod:"calico-kube-controllers-6c996ffb5d-7txfd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib26ef9f50b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.206 [INFO][6137] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.206 [INFO][6137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" iface="eth0" netns="" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.206 [INFO][6137] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.206 [INFO][6137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.246 [INFO][6143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.247 [INFO][6143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.247 [INFO][6143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.258 [WARNING][6143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.258 [INFO][6143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" HandleID="k8s-pod-network.5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Workload="ip--172--31--19--166-k8s-calico--kube--controllers--6c996ffb5d--7txfd-eth0" Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.263 [INFO][6143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:49.268913 containerd[1983]: 2025-01-30 13:53:49.266 [INFO][6137] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b" Jan 30 13:53:49.270391 containerd[1983]: time="2025-01-30T13:53:49.269224535Z" level=info msg="TearDown network for sandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" successfully" Jan 30 13:53:49.284358 containerd[1983]: time="2025-01-30T13:53:49.276587222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:49.284358 containerd[1983]: time="2025-01-30T13:53:49.284482678Z" level=info msg="RemovePodSandbox \"5df243c60b5b110a1c1a8bd2dccbc93a596bc92358c15f864052f423c056404b\" returns successfully" Jan 30 13:53:49.314949 kubelet[3510]: I0130 13:53:49.314895 3510 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:53:49.324137 kubelet[3510]: I0130 13:53:49.324084 3510 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:53:49.896664 systemd[1]: Started sshd@11-172.31.19.166:22-139.178.68.195:33730.service - OpenSSH per-connection server daemon (139.178.68.195:33730). Jan 30 13:53:50.141146 sshd[6170]: Accepted publickey for core from 139.178.68.195 port 33730 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:50.144286 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:50.150164 systemd-logind[1947]: New session 12 of user core. Jan 30 13:53:50.157343 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:53:51.249307 sshd[6170]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:51.255158 systemd[1]: sshd@11-172.31.19.166:22-139.178.68.195:33730.service: Deactivated successfully. Jan 30 13:53:51.258705 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:53:51.259836 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:53:51.260930 systemd-logind[1947]: Removed session 12. Jan 30 13:53:51.288458 systemd[1]: Started sshd@12-172.31.19.166:22-139.178.68.195:33744.service - OpenSSH per-connection server daemon (139.178.68.195:33744). 
Jan 30 13:53:51.474637 sshd[6185]: Accepted publickey for core from 139.178.68.195 port 33744 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:51.475345 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:51.481203 systemd-logind[1947]: New session 13 of user core. Jan 30 13:53:51.484286 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:53:51.880476 sshd[6185]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:51.887542 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:53:51.890321 systemd[1]: sshd@12-172.31.19.166:22-139.178.68.195:33744.service: Deactivated successfully. Jan 30 13:53:51.896110 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:53:51.936262 systemd-logind[1947]: Removed session 13. Jan 30 13:53:51.945531 systemd[1]: Started sshd@13-172.31.19.166:22-139.178.68.195:33746.service - OpenSSH per-connection server daemon (139.178.68.195:33746). Jan 30 13:53:52.143073 sshd[6196]: Accepted publickey for core from 139.178.68.195 port 33746 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:52.143788 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:52.165143 systemd-logind[1947]: New session 14 of user core. Jan 30 13:53:52.173352 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:53:52.423162 sshd[6196]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:52.428438 systemd[1]: sshd@13-172.31.19.166:22-139.178.68.195:33746.service: Deactivated successfully. Jan 30 13:53:52.430982 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:53:52.432338 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:53:52.433894 systemd-logind[1947]: Removed session 14. Jan 30 13:53:57.458517 systemd[1]: Started sshd@14-172.31.19.166:22-139.178.68.195:57362.service - OpenSSH per-connection server daemon (139.178.68.195:57362). Jan 30 13:53:57.661094 sshd[6215]: Accepted publickey for core from 139.178.68.195 port 57362 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:57.661934 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:57.668156 systemd-logind[1947]: New session 15 of user core. Jan 30 13:53:57.673327 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:53:58.133064 sshd[6215]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:58.143862 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:53:58.153828 systemd[1]: sshd@14-172.31.19.166:22-139.178.68.195:57362.service: Deactivated successfully. Jan 30 13:53:58.163009 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:53:58.167056 systemd-logind[1947]: Removed session 15. 
Jan 30 13:54:01.119128 kubelet[3510]: I0130 13:54:01.119067 3510 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:54:01.200277 kubelet[3510]: I0130 13:54:01.199224 3510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dlwgg" podStartSLOduration=40.936286266 podStartE2EDuration="53.199196286s" podCreationTimestamp="2025-01-30 13:53:08 +0000 UTC" firstStartedPulling="2025-01-30 13:53:35.526057407 +0000 UTC m=+49.062822725" lastFinishedPulling="2025-01-30 13:53:47.788967422 +0000 UTC m=+61.325732745" observedRunningTime="2025-01-30 13:53:48.826017515 +0000 UTC m=+62.362782856" watchObservedRunningTime="2025-01-30 13:54:01.199196286 +0000 UTC m=+74.735961625" Jan 30 13:54:03.191261 systemd[1]: Started sshd@15-172.31.19.166:22-139.178.68.195:57378.service - OpenSSH per-connection server daemon (139.178.68.195:57378). Jan 30 13:54:03.531637 sshd[6240]: Accepted publickey for core from 139.178.68.195 port 57378 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:03.546839 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:03.565648 systemd-logind[1947]: New session 16 of user core. Jan 30 13:54:03.571942 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:54:04.613645 sshd[6240]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:04.626527 systemd[1]: sshd@15-172.31.19.166:22-139.178.68.195:57378.service: Deactivated successfully. Jan 30 13:54:04.631225 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:54:04.632652 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:54:04.634834 systemd-logind[1947]: Removed session 16. Jan 30 13:54:09.659850 systemd[1]: Started sshd@16-172.31.19.166:22-139.178.68.195:50616.service - OpenSSH per-connection server daemon (139.178.68.195:50616). Jan 30 13:54:09.916923 sshd[6257]: Accepted publickey for core from 139.178.68.195 port 50616 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:09.921712 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:09.930421 systemd-logind[1947]: New session 17 of user core. Jan 30 13:54:09.938428 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:54:10.471568 sshd[6257]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:10.492151 systemd[1]: sshd@16-172.31.19.166:22-139.178.68.195:50616.service: Deactivated successfully. Jan 30 13:54:10.495401 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:54:10.497064 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:54:10.500436 systemd-logind[1947]: Removed session 17. Jan 30 13:54:15.515387 systemd[1]: Started sshd@17-172.31.19.166:22-139.178.68.195:52376.service - OpenSSH per-connection server daemon (139.178.68.195:52376). Jan 30 13:54:15.722530 sshd[6276]: Accepted publickey for core from 139.178.68.195 port 52376 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:15.725690 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:15.731630 systemd-logind[1947]: New session 18 of user core. Jan 30 13:54:15.737333 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 13:54:16.357187 sshd[6276]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:16.365443 systemd[1]: sshd@17-172.31.19.166:22-139.178.68.195:52376.service: Deactivated successfully. Jan 30 13:54:16.369058 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:54:16.376691 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:54:16.397519 systemd[1]: Started sshd@18-172.31.19.166:22-139.178.68.195:52382.service - OpenSSH per-connection server daemon (139.178.68.195:52382). Jan 30 13:54:16.399183 systemd-logind[1947]: Removed session 18. Jan 30 13:54:16.600582 sshd[6289]: Accepted publickey for core from 139.178.68.195 port 52382 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:16.604191 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:16.622935 systemd-logind[1947]: New session 19 of user core. Jan 30 13:54:16.629393 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:54:17.308961 systemd[1]: run-containerd-runc-k8s.io-387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0-runc.4HdcCV.mount: Deactivated successfully. Jan 30 13:54:17.611991 sshd[6289]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:17.619350 systemd[1]: sshd@18-172.31.19.166:22-139.178.68.195:52382.service: Deactivated successfully. Jan 30 13:54:17.621531 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:54:17.622477 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:54:17.623982 systemd-logind[1947]: Removed session 19. Jan 30 13:54:17.648697 systemd[1]: Started sshd@19-172.31.19.166:22-139.178.68.195:52386.service - OpenSSH per-connection server daemon (139.178.68.195:52386). Jan 30 13:54:17.858146 sshd[6325]: Accepted publickey for core from 139.178.68.195 port 52386 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:17.861541 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:17.869386 systemd-logind[1947]: New session 20 of user core. Jan 30 13:54:17.874323 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:54:22.107192 sshd[6325]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:22.124411 systemd[1]: sshd@19-172.31.19.166:22-139.178.68.195:52386.service: Deactivated successfully. Jan 30 13:54:22.132866 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:54:22.158089 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:54:22.186506 systemd[1]: Started sshd@20-172.31.19.166:22-139.178.68.195:52392.service - OpenSSH per-connection server daemon (139.178.68.195:52392). Jan 30 13:54:22.205775 systemd-logind[1947]: Removed session 20. Jan 30 13:54:22.408567 sshd[6382]: Accepted publickey for core from 139.178.68.195 port 52392 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:22.411081 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:22.418444 systemd-logind[1947]: New session 21 of user core. Jan 30 13:54:22.423375 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:54:23.565913 sshd[6382]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:23.572229 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit. 
Jan 30 13:54:23.572900 systemd[1]: sshd@20-172.31.19.166:22-139.178.68.195:52392.service: Deactivated successfully. Jan 30 13:54:23.577950 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:54:23.581960 systemd-logind[1947]: Removed session 21. Jan 30 13:54:23.604531 systemd[1]: Started sshd@21-172.31.19.166:22-139.178.68.195:52394.service - OpenSSH per-connection server daemon (139.178.68.195:52394). Jan 30 13:54:23.805584 sshd[6393]: Accepted publickey for core from 139.178.68.195 port 52394 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:23.808858 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:23.816218 systemd-logind[1947]: New session 22 of user core. Jan 30 13:54:23.824467 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:54:24.055660 sshd[6393]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:24.061267 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:54:24.061996 systemd[1]: sshd@21-172.31.19.166:22-139.178.68.195:52394.service: Deactivated successfully. Jan 30 13:54:24.065400 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:54:24.066807 systemd-logind[1947]: Removed session 22. Jan 30 13:54:29.101543 systemd[1]: Started sshd@22-172.31.19.166:22-139.178.68.195:54686.service - OpenSSH per-connection server daemon (139.178.68.195:54686). Jan 30 13:54:29.259667 sshd[6405]: Accepted publickey for core from 139.178.68.195 port 54686 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:29.260388 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:29.266326 systemd-logind[1947]: New session 23 of user core. Jan 30 13:54:29.282359 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:54:29.510088 sshd[6405]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:29.513412 systemd[1]: sshd@22-172.31.19.166:22-139.178.68.195:54686.service: Deactivated successfully. Jan 30 13:54:29.516024 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:54:29.518381 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:54:29.519832 systemd-logind[1947]: Removed session 23. Jan 30 13:54:34.558903 systemd[1]: Started sshd@23-172.31.19.166:22-139.178.68.195:54692.service - OpenSSH per-connection server daemon (139.178.68.195:54692). Jan 30 13:54:34.803065 sshd[6424]: Accepted publickey for core from 139.178.68.195 port 54692 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:54:34.807440 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:34.816470 systemd-logind[1947]: New session 24 of user core. Jan 30 13:54:34.822711 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:54:35.181606 sshd[6424]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:35.186033 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:54:35.186907 systemd[1]: sshd@23-172.31.19.166:22-139.178.68.195:54692.service: Deactivated successfully. Jan 30 13:54:35.190713 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:54:35.194479 systemd-logind[1947]: Removed session 24. Jan 30 13:54:40.237760 systemd[1]: Started sshd@24-172.31.19.166:22-139.178.68.195:56420.service - OpenSSH per-connection server daemon (139.178.68.195:56420). 
Jan 30 13:54:40.237760 systemd[1]: Started sshd@24-172.31.19.166:22-139.178.68.195:56420.service - OpenSSH per-connection server daemon (139.178.68.195:56420).
Jan 30 13:54:40.459644 sshd[6437]: Accepted publickey for core from 139.178.68.195 port 56420 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:40.463019 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:40.468397 systemd-logind[1947]: New session 25 of user core.
Jan 30 13:54:40.473310 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:54:40.722716 sshd[6437]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:40.727085 systemd[1]: sshd@24-172.31.19.166:22-139.178.68.195:56420.service: Deactivated successfully.
Jan 30 13:54:40.731709 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:54:40.736054 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:54:40.738741 systemd-logind[1947]: Removed session 25.
Jan 30 13:54:45.761656 systemd[1]: Started sshd@25-172.31.19.166:22-139.178.68.195:55810.service - OpenSSH per-connection server daemon (139.178.68.195:55810).
Jan 30 13:54:45.933929 sshd[6450]: Accepted publickey for core from 139.178.68.195 port 55810 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:45.936556 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:45.944506 systemd-logind[1947]: New session 26 of user core.
Jan 30 13:54:45.948309 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:54:46.169161 sshd[6450]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:46.174824 systemd[1]: sshd@25-172.31.19.166:22-139.178.68.195:55810.service: Deactivated successfully.
Jan 30 13:54:46.178773 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:54:46.180319 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:54:46.182091 systemd-logind[1947]: Removed session 26.
Jan 30 13:54:49.870464 systemd[1]: run-containerd-runc-k8s.io-8f03524f05682c345e363e26138de48623dffdd9f73a9987a4da1bc57bb96725-runc.xGWQJF.mount: Deactivated successfully.
Jan 30 13:54:51.213889 systemd[1]: Started sshd@26-172.31.19.166:22-139.178.68.195:55826.service - OpenSSH per-connection server daemon (139.178.68.195:55826).
Jan 30 13:54:51.435923 sshd[6507]: Accepted publickey for core from 139.178.68.195 port 55826 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:51.437222 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:51.443592 systemd-logind[1947]: New session 27 of user core.
Jan 30 13:54:51.449326 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:54:51.679460 sshd[6507]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:51.683190 systemd[1]: sshd@26-172.31.19.166:22-139.178.68.195:55826.service: Deactivated successfully.
Jan 30 13:54:51.686863 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:54:51.691201 systemd-logind[1947]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:54:51.692619 systemd-logind[1947]: Removed session 27.
Jan 30 13:54:56.724017 systemd[1]: Started sshd@27-172.31.19.166:22-139.178.68.195:33544.service - OpenSSH per-connection server daemon (139.178.68.195:33544).
Jan 30 13:54:56.926746 sshd[6526]: Accepted publickey for core from 139.178.68.195 port 33544 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs
Jan 30 13:54:56.929210 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:56.936209 systemd-logind[1947]: New session 28 of user core.
Jan 30 13:54:56.941362 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:54:57.156508 sshd[6526]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:57.161477 systemd[1]: sshd@27-172.31.19.166:22-139.178.68.195:33544.service: Deactivated successfully.
Jan 30 13:54:57.164075 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:54:57.166550 systemd-logind[1947]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:54:57.168815 systemd-logind[1947]: Removed session 28.
Jan 30 13:55:44.927369 systemd[1]: cri-containerd-a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940.scope: Deactivated successfully.
Jan 30 13:55:44.927790 systemd[1]: cri-containerd-a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940.scope: Consumed 3.989s CPU time.
Jan 30 13:55:45.175738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940-rootfs.mount: Deactivated successfully.
Jan 30 13:55:45.210908 containerd[1983]: time="2025-01-30T13:55:45.196422341Z" level=info msg="shim disconnected" id=a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940 namespace=k8s.io
Jan 30 13:55:45.225703 containerd[1983]: time="2025-01-30T13:55:45.225638634Z" level=warning msg="cleaning up after shim disconnected" id=a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940 namespace=k8s.io
Jan 30 13:55:45.225703 containerd[1983]: time="2025-01-30T13:55:45.225693735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:55:45.559056 kubelet[3510]: I0130 13:55:45.558914 3510 scope.go:117] "RemoveContainer" containerID="a141f8c050d86a659ee4b14162b9cedc19a8745a097dae59ae8b01a99544a940"
Jan 30 13:55:45.607060 containerd[1983]: time="2025-01-30T13:55:45.606753485Z" level=info msg="CreateContainer within sandbox \"4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 30 13:55:45.662509 containerd[1983]: time="2025-01-30T13:55:45.662466418Z" level=info msg="CreateContainer within sandbox \"4fc5a8cbbb2b8fe7ef285717843bee47d49522e9f624415b2e0a9317a8886e4e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1e9a90737b67c793703a7a189f48dd7fe6ca2f99754307e0256a000b7afd1f96\""
Jan 30 13:55:45.663300 containerd[1983]: time="2025-01-30T13:55:45.663246432Z" level=info msg="StartContainer for \"1e9a90737b67c793703a7a189f48dd7fe6ca2f99754307e0256a000b7afd1f96\""
Jan 30 13:55:45.748424 systemd[1]: Started cri-containerd-1e9a90737b67c793703a7a189f48dd7fe6ca2f99754307e0256a000b7afd1f96.scope - libcontainer container 1e9a90737b67c793703a7a189f48dd7fe6ca2f99754307e0256a000b7afd1f96.
Jan 30 13:55:45.791378 containerd[1983]: time="2025-01-30T13:55:45.791311079Z" level=info msg="StartContainer for \"1e9a90737b67c793703a7a189f48dd7fe6ca2f99754307e0256a000b7afd1f96\" returns successfully"
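The tigera-operator restart just above is the kubelet's standard recovery path: systemd deactivates the container's scope, containerd reports the shim disconnected, the kubelet removes the dead container, then creates and starts a replacement in the same sandbox with the Attempt counter incremented. A sketch, under the same illustrative assumptions as before, that pairs each "shim disconnected" event with the kubelet RemoveContainer entry carrying the same 64-hex container ID:

    import re

    SHIM_RE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')
    REMOVE_RE = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')

    def removed_after_disconnect(lines):
        # Yield IDs of containers whose shim disconnected and which the
        # kubelet subsequently removed (i.e. crash-and-replace cycles).
        disconnected = set()
        for line in lines:
            if (m := SHIM_RE.search(line)):
                disconnected.add(m.group(1))
            elif (m := REMOVE_RE.search(line)) and m.group(1) in disconnected:
                yield m.group(1)

On the entries above this would yield the a141f8c0... ID, matching the tigera-operator container that was replaced.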
Jan 30 13:55:46.815226 systemd[1]: cri-containerd-bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c.scope: Deactivated successfully.
Jan 30 13:55:46.815764 systemd[1]: cri-containerd-bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c.scope: Consumed 4.472s CPU time, 25.5M memory peak, 0B memory swap peak.
Jan 30 13:55:46.961897 containerd[1983]: time="2025-01-30T13:55:46.961715965Z" level=info msg="shim disconnected" id=bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c namespace=k8s.io
Jan 30 13:55:46.965321 containerd[1983]: time="2025-01-30T13:55:46.961904689Z" level=warning msg="cleaning up after shim disconnected" id=bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c namespace=k8s.io
Jan 30 13:55:46.965321 containerd[1983]: time="2025-01-30T13:55:46.961920597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:55:46.963956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c-rootfs.mount: Deactivated successfully.
Jan 30 13:55:47.020679 systemd[1]: run-containerd-runc-k8s.io-387b95110f1249659517e73132f228bc3dd8528bb059b6f17cc1c3511001a3c0-runc.umrD1G.mount: Deactivated successfully.
Jan 30 13:55:47.576558 kubelet[3510]: I0130 13:55:47.572875 3510 scope.go:117] "RemoveContainer" containerID="bc1a8916446557c8cfc648cd63725f2f8f130d2a2306153112cbe2f97ad5609c"
Jan 30 13:55:47.595224 containerd[1983]: time="2025-01-30T13:55:47.595067080Z" level=info msg="CreateContainer within sandbox \"f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 13:55:47.631428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286065884.mount: Deactivated successfully.
Jan 30 13:55:47.640363 containerd[1983]: time="2025-01-30T13:55:47.640315346Z" level=info msg="CreateContainer within sandbox \"f78876b23d1d249ea8881263fd55fdeeb5d758a4f821ef6fbb175f7f67ec81a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ec036322ac677650cd9a3bd4db4ccef65c62c92dbfe8cc27e58aadb5bf5df60b\""
Jan 30 13:55:47.642581 containerd[1983]: time="2025-01-30T13:55:47.641052266Z" level=info msg="StartContainer for \"ec036322ac677650cd9a3bd4db4ccef65c62c92dbfe8cc27e58aadb5bf5df60b\""
Jan 30 13:55:47.677688 systemd[1]: Started cri-containerd-ec036322ac677650cd9a3bd4db4ccef65c62c92dbfe8cc27e58aadb5bf5df60b.scope - libcontainer container ec036322ac677650cd9a3bd4db4ccef65c62c92dbfe8cc27e58aadb5bf5df60b.
Jan 30 13:55:47.763488 containerd[1983]: time="2025-01-30T13:55:47.763441011Z" level=info msg="StartContainer for \"ec036322ac677650cd9a3bd4db4ccef65c62c92dbfe8cc27e58aadb5bf5df60b\" returns successfully"
Jan 30 13:55:47.966252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453601303.mount: Deactivated successfully.
Jan 30 13:55:50.165950 kubelet[3510]: E0130 13:55:50.165891 3510 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-166?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
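The "Failed to update lease" error means the kubelet could not renew its node lease against the API server at 172.31.19.166:6443 within the 10s client timeout, consistent with the control-plane containers on this node being torn down and restarted in the same window. A minimal sketch for inspecting that lease after the fact, assuming the official kubernetes Python client and a working kubeconfig (neither appears in the log):

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    leases = client.CoordinationV1Api()
    lease = leases.read_namespaced_lease("ip-172-31-19-166", "kube-node-lease")
    # holder_identity names the kubelet; a stale renew_time indicates missed renewals
    print(lease.spec.holder_identity, lease.spec.renew_time)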
Jan 30 13:55:51.245499 systemd[1]: cri-containerd-aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc.scope: Deactivated successfully.
Jan 30 13:55:51.246702 systemd[1]: cri-containerd-aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc.scope: Consumed 1.676s CPU time, 19.3M memory peak, 0B memory swap peak.
Jan 30 13:55:51.286848 containerd[1983]: time="2025-01-30T13:55:51.286649619Z" level=info msg="shim disconnected" id=aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc namespace=k8s.io
Jan 30 13:55:51.286848 containerd[1983]: time="2025-01-30T13:55:51.286846818Z" level=warning msg="cleaning up after shim disconnected" id=aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc namespace=k8s.io
Jan 30 13:55:51.287702 containerd[1983]: time="2025-01-30T13:55:51.286863864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:55:51.288554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc-rootfs.mount: Deactivated successfully.
Jan 30 13:55:51.589079 kubelet[3510]: I0130 13:55:51.588964 3510 scope.go:117] "RemoveContainer" containerID="aeea4c5fe06acfaa87c5aff1d2a36aa6fdc3962ee1c8be227f72aa36a6050ecc"
Jan 30 13:55:51.592078 containerd[1983]: time="2025-01-30T13:55:51.592042462Z" level=info msg="CreateContainer within sandbox \"d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 13:55:51.617082 containerd[1983]: time="2025-01-30T13:55:51.617033494Z" level=info msg="CreateContainer within sandbox \"d6078041ebca8cd2c5bc219cb2a0c21aa465d83e88bbd764d634fab9c908d4dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"62d810e8296fc3d731114b3f071c60b96a318e82ef1ab0951b8cf7596b425fde\""
Jan 30 13:55:51.617671 containerd[1983]: time="2025-01-30T13:55:51.617636746Z" level=info msg="StartContainer for \"62d810e8296fc3d731114b3f071c60b96a318e82ef1ab0951b8cf7596b425fde\""
Jan 30 13:55:51.661449 systemd[1]: Started cri-containerd-62d810e8296fc3d731114b3f071c60b96a318e82ef1ab0951b8cf7596b425fde.scope - libcontainer container 62d810e8296fc3d731114b3f071c60b96a318e82ef1ab0951b8cf7596b425fde.
Jan 30 13:55:51.713603 containerd[1983]: time="2025-01-30T13:55:51.713553261Z" level=info msg="StartContainer for \"62d810e8296fc3d731114b3f071c60b96a318e82ef1ab0951b8cf7596b425fde\" returns successfully"
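Across this window the kubelet recreated tigera-operator, kube-controller-manager, and kube-scheduler, each at Attempt:1. A last illustrative sketch (same assumptions as the earlier snippets) that tallies the highest restart attempt per container name from the CreateContainer lines above:

    import re
    from collections import Counter

    ATTEMPT_RE = re.compile(r"ContainerMetadata\{Name:([\w-]+),Attempt:(\d+),\}")

    def restart_attempts(lines):
        # Highest Attempt seen per container name; anything > 0 was restarted.
        attempts = Counter()
        for line in lines:
            for name, n in ATTEMPT_RE.findall(line):
                attempts[name] = max(attempts[name], int(n))
        return attempts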