Dec 13 01:31:28.979993 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:31:28.980021 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:31:28.980031 kernel: BIOS-provided physical RAM map: Dec 13 01:31:28.980037 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:31:28.980053 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:31:28.980061 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:31:28.980072 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 01:31:28.980079 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 01:31:28.980085 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 01:31:28.980092 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:31:28.980099 kernel: NX (Execute Disable) protection: active Dec 13 01:31:28.980106 kernel: APIC: Static calls initialized Dec 13 01:31:28.980112 kernel: SMBIOS 2.7 present. Dec 13 01:31:28.980120 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 01:31:28.980130 kernel: Hypervisor detected: KVM Dec 13 01:31:28.980138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:31:28.980146 kernel: kvm-clock: using sched offset of 6641552568 cycles Dec 13 01:31:28.980154 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:31:28.980162 kernel: tsc: Detected 2499.994 MHz processor Dec 13 01:31:28.980171 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:31:28.980179 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:31:28.980189 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 01:31:28.980197 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:31:28.980204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:31:28.980212 kernel: Using GB pages for direct mapping Dec 13 01:31:28.980220 kernel: ACPI: Early table checksum verification disabled Dec 13 01:31:28.980227 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 01:31:28.980235 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 01:31:28.980243 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:31:28.980250 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 01:31:28.980260 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 01:31:28.980269 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:31:28.980403 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:31:28.980415 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 01:31:28.980423 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:31:28.980430 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 01:31:28.980438 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 01:31:28.980446 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:31:28.980453 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 01:31:28.980465 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 01:31:28.980477 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 01:31:28.980485 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 01:31:28.980493 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 01:31:28.980501 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 01:31:28.980511 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 01:31:28.980519 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 01:31:28.980527 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 01:31:28.980536 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 01:31:28.980551 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:31:28.980559 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:31:28.980568 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 01:31:28.980576 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 01:31:28.980584 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 01:31:28.980595 kernel: Zone ranges: Dec 13 01:31:28.980603 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:31:28.980611 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 01:31:28.980619 kernel: Normal empty Dec 13 01:31:28.980628 kernel: Movable zone start for each node Dec 13 01:31:28.980636 kernel: Early memory node ranges Dec 13 01:31:28.980644 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:31:28.980652 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 01:31:28.980660 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 01:31:28.980669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:31:28.980679 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:31:28.980688 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 01:31:28.980696 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:31:28.980704 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:31:28.980712 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 01:31:28.980721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:31:28.980729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:31:28.980737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:31:28.980752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:31:28.980763 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:31:28.980771 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:31:28.980779 kernel: TSC deadline timer available Dec 13 01:31:28.980787 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:31:28.980795 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:31:28.980804 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 01:31:28.980812 kernel: Booting paravirtualized kernel on KVM Dec 13 01:31:28.980820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:31:28.980829 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:31:28.980839 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:31:28.980848 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:31:28.980856 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:31:28.980863 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:31:28.980872 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:31:28.980881 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:31:28.980890 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:31:28.980898 kernel: random: crng init done Dec 13 01:31:28.980909 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:31:28.980917 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:31:28.980925 kernel: Fallback order for Node 0: 0 Dec 13 01:31:28.980933 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 01:31:28.980941 kernel: Policy zone: DMA32 Dec 13 01:31:28.980949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:31:28.980958 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Dec 13 01:31:28.980966 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:31:28.981017 kernel: Kernel/User page tables isolation: enabled Dec 13 01:31:28.981031 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:31:28.981040 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:31:28.981048 kernel: Dynamic Preempt: voluntary Dec 13 01:31:28.981056 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:31:28.981065 kernel: rcu: RCU event tracing is enabled. Dec 13 01:31:28.981074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:31:28.981082 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:31:28.981091 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:31:28.981099 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:31:28.981110 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:31:28.981118 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:31:28.981127 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:31:28.981135 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:31:28.981143 kernel: Console: colour VGA+ 80x25 Dec 13 01:31:28.981151 kernel: printk: console [ttyS0] enabled Dec 13 01:31:28.981160 kernel: ACPI: Core revision 20230628 Dec 13 01:31:28.981168 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 01:31:28.981177 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:31:28.981187 kernel: x2apic enabled Dec 13 01:31:28.981196 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:31:28.981213 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 13 01:31:28.981224 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Dec 13 01:31:28.981233 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:31:28.981241 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:31:28.981250 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:31:28.981272 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:31:28.981280 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:31:28.981289 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:31:28.981297 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 01:31:28.981306 kernel: RETBleed: Vulnerable Dec 13 01:31:28.981316 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:31:28.981325 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:31:28.981346 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:31:28.981354 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:31:28.981363 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:31:28.981371 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:31:28.981380 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:31:28.981391 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 01:31:28.981399 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 01:31:28.981408 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:31:28.981416 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:31:28.981425 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:31:28.981433 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 01:31:28.981460 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:31:28.981469 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 01:31:28.981477 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 01:31:28.981486 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 01:31:28.981494 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 01:31:28.981505 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 01:31:28.981514 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 01:31:28.981523 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 01:31:28.981531 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:31:28.981540 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:31:28.981548 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:31:28.981557 kernel: landlock: Up and running. Dec 13 01:31:28.981566 kernel: SELinux: Initializing. Dec 13 01:31:28.981574 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:31:28.981583 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:31:28.981637 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:31:28.981654 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:31:28.981663 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:31:28.981672 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:31:28.981682 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:31:28.981690 kernel: signal: max sigframe size: 3632 Dec 13 01:31:28.981699 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:31:28.981708 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:31:28.981717 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:31:28.981726 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:31:28.981737 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:31:28.981746 kernel: .... node #0, CPUs: #1 Dec 13 01:31:28.981756 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:31:28.981765 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:31:28.981774 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:31:28.981783 kernel: smpboot: Max logical packages: 1 Dec 13 01:31:28.981792 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Dec 13 01:31:28.981801 kernel: devtmpfs: initialized Dec 13 01:31:28.981812 kernel: x86/mm: Memory block size: 128MB Dec 13 01:31:28.981821 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:31:28.981830 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:31:28.981839 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:31:28.981848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:31:28.981857 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:31:28.981865 kernel: audit: type=2000 audit(1734053487.622:1): state=initialized audit_enabled=0 res=1 Dec 13 01:31:28.981874 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:31:28.981883 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:31:28.981894 kernel: cpuidle: using governor menu Dec 13 01:31:28.981903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:31:28.981912 kernel: dca service started, version 1.12.1 Dec 13 01:31:28.981920 kernel: PCI: Using configuration type 1 for base access Dec 13 01:31:28.981929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:31:28.981938 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:31:28.981947 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:31:28.981956 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:31:28.981964 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:31:28.981976 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:31:28.981985 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:31:28.981994 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:31:28.982002 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:31:28.982011 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:31:28.982020 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:31:28.982029 kernel: ACPI: Interpreter enabled Dec 13 01:31:28.982037 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:31:28.982046 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:31:28.982055 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:31:28.982066 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:31:28.982075 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:31:28.982084 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:31:28.982244 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:31:28.982359 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:31:28.982452 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:31:28.982464 kernel: acpiphp: Slot [3] registered Dec 13 01:31:28.982477 kernel: acpiphp: Slot [4] registered Dec 13 01:31:28.982485 kernel: acpiphp: Slot [5] registered Dec 13 01:31:28.982494 kernel: acpiphp: Slot [6] registered Dec 13 01:31:28.982503 kernel: acpiphp: Slot [7] registered Dec 13 01:31:28.982511 kernel: acpiphp: Slot [8] registered Dec 13 01:31:28.982520 kernel: acpiphp: Slot [9] registered Dec 13 01:31:28.982529 kernel: acpiphp: Slot [10] registered Dec 13 01:31:28.982537 kernel: acpiphp: Slot [11] registered Dec 13 01:31:28.982546 kernel: acpiphp: Slot [12] registered Dec 13 01:31:28.982557 kernel: acpiphp: Slot [13] registered Dec 13 01:31:28.982566 kernel: acpiphp: Slot [14] registered Dec 13 01:31:28.982575 kernel: acpiphp: Slot [15] registered Dec 13 01:31:28.982584 kernel: acpiphp: Slot [16] registered Dec 13 01:31:28.982592 kernel: acpiphp: Slot [17] registered Dec 13 01:31:28.982601 kernel: acpiphp: Slot [18] registered Dec 13 01:31:28.982609 kernel: acpiphp: Slot [19] registered Dec 13 01:31:28.982618 kernel: acpiphp: Slot [20] registered Dec 13 01:31:28.982627 kernel: acpiphp: Slot [21] registered Dec 13 01:31:28.982636 kernel: acpiphp: Slot [22] registered Dec 13 01:31:28.982647 kernel: acpiphp: Slot [23] registered Dec 13 01:31:28.982656 kernel: acpiphp: Slot [24] registered Dec 13 01:31:28.982664 kernel: acpiphp: Slot [25] registered Dec 13 01:31:28.982673 kernel: acpiphp: Slot [26] registered Dec 13 01:31:28.982682 kernel: acpiphp: Slot [27] registered Dec 13 01:31:28.982690 kernel: acpiphp: Slot [28] registered Dec 13 01:31:28.982699 kernel: acpiphp: Slot [29] registered Dec 13 01:31:28.982707 kernel: acpiphp: Slot [30] registered Dec 13 01:31:28.982716 kernel: acpiphp: Slot [31] registered Dec 13 01:31:28.982727 kernel: PCI host bridge to bus 0000:00 
Dec 13 01:31:28.982824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:31:28.983211 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:31:28.983373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:31:28.983587 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 01:31:28.983782 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:31:28.984070 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:31:28.984417 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 01:31:28.984954 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 01:31:28.985249 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:31:28.985427 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 01:31:28.985571 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 01:31:28.985932 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 01:31:28.986256 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 01:31:28.986578 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 01:31:28.986875 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 01:31:28.987124 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 01:31:28.987763 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 01:31:28.988101 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 01:31:28.988393 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 01:31:28.988660 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:31:28.988966 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:31:28.989220 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 01:31:28.989504 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:31:28.989806 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 01:31:28.989833 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:31:28.989851 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:31:28.989913 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:31:28.989931 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:31:28.989982 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:31:28.990002 kernel: iommu: Default domain type: Translated Dec 13 01:31:28.990020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:31:28.990072 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:31:28.990091 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:31:28.990109 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:31:28.990126 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 01:31:28.990357 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 01:31:28.990569 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 01:31:28.990724 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:31:28.990744 kernel: vgaarb: loaded Dec 13 01:31:28.990761 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 01:31:28.990777 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 01:31:28.990793 kernel: clocksource: Switched 
to clocksource kvm-clock Dec 13 01:31:28.990808 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:31:28.990824 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:31:28.990844 kernel: pnp: PnP ACPI init Dec 13 01:31:28.990859 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:31:28.990875 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:31:28.990891 kernel: NET: Registered PF_INET protocol family Dec 13 01:31:28.990956 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:31:28.990986 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:31:28.991002 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:31:28.991017 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:31:28.991033 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:31:28.991052 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:31:28.991067 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:31:28.991082 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:31:28.991097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:31:28.991113 kernel: NET: Registered PF_XDP protocol family Dec 13 01:31:28.991252 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:31:28.991409 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:31:28.991532 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:31:28.991716 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 01:31:28.991864 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:31:28.991887 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:31:28.991904 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:31:28.991921 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 13 01:31:28.991938 kernel: clocksource: Switched to clocksource tsc Dec 13 01:31:28.991955 kernel: Initialise system trusted keyrings Dec 13 01:31:28.991972 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:31:28.991992 kernel: Key type asymmetric registered Dec 13 01:31:28.992009 kernel: Asymmetric key parser 'x509' registered Dec 13 01:31:28.992026 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:31:28.992051 kernel: io scheduler mq-deadline registered Dec 13 01:31:28.992069 kernel: io scheduler kyber registered Dec 13 01:31:28.992085 kernel: io scheduler bfq registered Dec 13 01:31:28.992120 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:31:28.992138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:31:28.992156 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:31:28.992177 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:31:28.992194 kernel: i8042: Warning: Keylock active Dec 13 01:31:28.992211 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:31:28.992229 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:31:28.992412 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:31:28.992554 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 
01:31:28.992694 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:31:28 UTC (1734053488) Dec 13 01:31:28.992832 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:31:28.992859 kernel: intel_pstate: CPU model not supported Dec 13 01:31:28.992876 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:31:28.992894 kernel: Segment Routing with IPv6 Dec 13 01:31:28.992911 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:31:28.992929 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:31:28.992947 kernel: Key type dns_resolver registered Dec 13 01:31:28.992964 kernel: IPI shorthand broadcast: enabled Dec 13 01:31:28.993034 kernel: sched_clock: Marking stable (595001625, 229347779)->(941452825, -117103421) Dec 13 01:31:28.993055 kernel: registered taskstats version 1 Dec 13 01:31:28.993077 kernel: Loading compiled-in X.509 certificates Dec 13 01:31:28.993095 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:31:28.993113 kernel: Key type .fscrypt registered Dec 13 01:31:28.993130 kernel: Key type fscrypt-provisioning registered Dec 13 01:31:28.993148 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:31:28.993165 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:31:28.993182 kernel: ima: No architecture policies found Dec 13 01:31:28.993200 kernel: clk: Disabling unused clocks Dec 13 01:31:28.993217 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:31:28.993238 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:31:28.993256 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:31:28.993273 kernel: Run /init as init process Dec 13 01:31:28.993291 kernel: with arguments: Dec 13 01:31:28.993307 kernel: /init Dec 13 01:31:28.993325 kernel: with environment: Dec 13 01:31:28.993374 kernel: HOME=/ Dec 13 01:31:28.993388 kernel: TERM=linux Dec 13 01:31:28.993402 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:31:28.993424 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:31:28.993454 systemd[1]: Detected virtualization amazon. Dec 13 01:31:28.993471 systemd[1]: Detected architecture x86-64. Dec 13 01:31:28.993486 systemd[1]: Running in initrd. Dec 13 01:31:28.993503 systemd[1]: No hostname configured, using default hostname. Dec 13 01:31:28.993522 systemd[1]: Hostname set to . Dec 13 01:31:28.993540 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:31:28.993554 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:31:28.993568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:31:28.993669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:31:28.993689 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:31:28.993706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:31:28.993722 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Dec 13 01:31:28.993743 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:31:28.993762 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:31:28.993780 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:31:28.993795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:31:28.993812 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:31:28.993829 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:31:28.993845 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:31:28.993865 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:31:28.993881 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:31:28.993898 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:31:28.993914 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:31:28.993930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:31:28.993946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:31:28.993962 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:31:28.993978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:31:28.993997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:31:28.994014 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:31:28.994030 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:31:28.994047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:31:28.994063 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:31:28.994079 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:31:28.994096 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:31:28.994116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:31:28.994132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:31:28.994149 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:31:28.994165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:31:28.994215 systemd-journald[178]: Collecting audit messages is disabled. Dec 13 01:31:28.994255 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:31:28.994274 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:31:28.994296 systemd-journald[178]: Journal started Dec 13 01:31:28.994329 systemd-journald[178]: Runtime Journal (/run/log/journal/ec22cb6ccc3e64aa98c88bbcb885ac12) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:31:28.968596 systemd-modules-load[179]: Inserted module 'overlay' Dec 13 01:31:29.003906 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:31:29.008709 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:31:29.015534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 01:31:29.034549 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:31:29.147437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:31:29.147484 kernel: Bridge firewalling registered Dec 13 01:31:29.039869 systemd-modules-load[179]: Inserted module 'br_netfilter' Dec 13 01:31:29.152229 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:31:29.155795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:29.156251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:31:29.166555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:31:29.172528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:31:29.175497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:31:29.193694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:31:29.203110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:31:29.211603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:31:29.214914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:31:29.229080 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:31:29.250206 dracut-cmdline[213]: dracut-dracut-053 Dec 13 01:31:29.255042 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:31:29.282202 systemd-resolved[208]: Positive Trust Anchors: Dec 13 01:31:29.282575 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:31:29.282630 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:31:29.287855 systemd-resolved[208]: Defaulting to hostname 'linux'. Dec 13 01:31:29.289077 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:31:29.293435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:31:29.365368 kernel: SCSI subsystem initialized Dec 13 01:31:29.375359 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:31:29.388359 kernel: iscsi: registered transport (tcp) Dec 13 01:31:29.412551 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:31:29.412631 kernel: QLogic iSCSI HBA Driver Dec 13 01:31:29.460784 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:31:29.465557 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:31:29.496555 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:31:29.496645 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:31:29.496669 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:31:29.543362 kernel: raid6: avx512x4 gen() 16926 MB/s Dec 13 01:31:29.560382 kernel: raid6: avx512x2 gen() 15965 MB/s Dec 13 01:31:29.577369 kernel: raid6: avx512x1 gen() 16100 MB/s Dec 13 01:31:29.594379 kernel: raid6: avx2x4 gen() 14405 MB/s Dec 13 01:31:29.611369 kernel: raid6: avx2x2 gen() 15471 MB/s Dec 13 01:31:29.628402 kernel: raid6: avx2x1 gen() 11605 MB/s Dec 13 01:31:29.628541 kernel: raid6: using algorithm avx512x4 gen() 16926 MB/s Dec 13 01:31:29.646366 kernel: raid6: .... xor() 5825 MB/s, rmw enabled Dec 13 01:31:29.646444 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:31:29.670370 kernel: xor: automatically using best checksumming function avx Dec 13 01:31:29.861368 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:31:29.873999 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:31:29.879556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:31:29.904896 systemd-udevd[396]: Using default interface naming scheme 'v255'. Dec 13 01:31:29.911270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:31:29.921111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:31:29.945660 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Dec 13 01:31:29.988020 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:31:30.003210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:31:30.074986 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:31:30.086553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:31:30.123471 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:31:30.127309 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:31:30.130465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:31:30.133635 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:31:30.142530 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:31:30.175653 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:31:30.196353 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:31:30.208359 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:31:30.224200 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:31:30.224426 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 01:31:30.224621 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:31:30.224646 kernel: AES CTR mode by8 optimization enabled Dec 13 01:31:30.224667 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8e:f9:27:98:91 Dec 13 01:31:30.223809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:31:30.223973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:31:30.226644 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:31:30.227950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:31:30.228185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:30.233857 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:31:30.237964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:31:30.250280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:31:30.283643 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:31:30.283888 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 01:31:30.295367 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:31:30.299788 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:31:30.299839 kernel: GPT:9289727 != 16777215 Dec 13 01:31:30.299857 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:31:30.299874 kernel: GPT:9289727 != 16777215 Dec 13 01:31:30.299890 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:31:30.299906 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:31:30.420015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:30.427359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:31:30.441122 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (457) Dec 13 01:31:30.458356 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (447) Dec 13 01:31:30.480122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:31:30.501940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:31:30.561521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:31:30.574759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:31:30.581029 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:31:30.581174 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:31:30.593655 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:31:30.608309 disk-uuid[630]: Primary Header is updated. Dec 13 01:31:30.608309 disk-uuid[630]: Secondary Entries is updated. Dec 13 01:31:30.608309 disk-uuid[630]: Secondary Header is updated. 
Dec 13 01:31:30.616475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:31:30.628438 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:31:30.633940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:31:31.639400 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:31:31.640997 disk-uuid[631]: The operation has completed successfully. Dec 13 01:31:31.798565 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:31:31.798691 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:31:31.831670 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:31:31.849439 sh[972]: Success Dec 13 01:31:31.865396 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:31:31.973120 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:31:31.985438 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:31:31.987609 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:31:32.031228 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:31:32.031295 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:31:32.031375 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:31:32.031398 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:31:32.033739 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:31:32.155368 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:31:32.157432 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:31:32.158269 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:31:32.163598 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:31:32.166304 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:31:32.194481 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:31:32.194546 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:31:32.194568 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:31:32.200384 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:31:32.214366 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:31:32.214385 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:31:32.236785 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:31:32.248552 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:31:32.284652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:31:32.290744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:31:32.327793 systemd-networkd[1164]: lo: Link UP Dec 13 01:31:32.327804 systemd-networkd[1164]: lo: Gained carrier Dec 13 01:31:32.329537 systemd-networkd[1164]: Enumeration completed Dec 13 01:31:32.330133 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:31:32.330139 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:31:32.330911 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:31:32.336746 systemd[1]: Reached target network.target - Network. Dec 13 01:31:32.353662 systemd-networkd[1164]: eth0: Link UP Dec 13 01:31:32.353673 systemd-networkd[1164]: eth0: Gained carrier Dec 13 01:31:32.353688 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:31:32.372431 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.29.53/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:31:32.798949 ignition[1115]: Ignition 2.19.0 Dec 13 01:31:32.798963 ignition[1115]: Stage: fetch-offline Dec 13 01:31:32.799234 ignition[1115]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:32.801474 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:31:32.799247 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:32.799657 ignition[1115]: Ignition finished successfully Dec 13 01:31:32.824555 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:31:32.841511 ignition[1173]: Ignition 2.19.0 Dec 13 01:31:32.841525 ignition[1173]: Stage: fetch Dec 13 01:31:32.842390 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:32.842403 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:32.842510 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:32.853474 ignition[1173]: PUT result: OK Dec 13 01:31:32.855743 ignition[1173]: parsed url from cmdline: "" Dec 13 01:31:32.855753 ignition[1173]: no config URL provided Dec 13 01:31:32.855761 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:31:32.855783 ignition[1173]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:31:32.855800 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:32.857707 ignition[1173]: PUT result: OK Dec 13 01:31:32.859003 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:31:32.862293 ignition[1173]: GET result: OK Dec 13 01:31:32.863122 ignition[1173]: parsing config with SHA512: 196a9d6ae3d39100d06bf86d2de261312c8cfaccc2403ca014e6d99050cfd5429caf0a9b148db3852db36b2cfd6b5ad2f2d7288f0821f90e14f68f123a5e3627 Dec 13 01:31:32.870078 unknown[1173]: fetched base config from "system" Dec 13 01:31:32.870581 unknown[1173]: fetched base config from "system" Dec 13 01:31:32.870610 unknown[1173]: fetched user config from "aws" Dec 13 01:31:32.872393 ignition[1173]: fetch: fetch complete Dec 13 01:31:32.872401 ignition[1173]: fetch: fetch passed Dec 13 01:31:32.872717 ignition[1173]: Ignition finished successfully Dec 13 01:31:32.877909 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:31:32.884519 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:31:32.902931 ignition[1180]: Ignition 2.19.0 Dec 13 01:31:32.902942 ignition[1180]: Stage: kargs Dec 13 01:31:32.903261 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:32.903271 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:32.903437 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:32.904756 ignition[1180]: PUT result: OK Dec 13 01:31:32.909923 ignition[1180]: kargs: kargs passed Dec 13 01:31:32.909976 ignition[1180]: Ignition finished successfully Dec 13 01:31:32.913353 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:31:32.919641 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:31:32.935910 ignition[1186]: Ignition 2.19.0 Dec 13 01:31:32.935921 ignition[1186]: Stage: disks Dec 13 01:31:32.936309 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:32.936319 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:32.936420 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:32.937672 ignition[1186]: PUT result: OK Dec 13 01:31:32.945783 ignition[1186]: disks: disks passed Dec 13 01:31:32.945869 ignition[1186]: Ignition finished successfully Dec 13 01:31:32.948746 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:31:32.951367 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:31:32.952725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:31:32.956676 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:31:32.958746 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:31:32.961583 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:31:32.972563 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:31:33.008285 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:31:33.016819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:31:33.026642 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:31:33.176375 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:31:33.177141 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:31:33.177954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:31:33.197534 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:31:33.203492 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:31:33.204313 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:31:33.204399 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:31:33.204430 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:31:33.215933 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:31:33.224863 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:31:33.230389 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214) Dec 13 01:31:33.234326 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:31:33.234540 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:31:33.234565 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:31:33.241359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:31:33.243728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:31:33.741468 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:31:33.762422 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:31:33.769583 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:31:33.776237 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:31:34.138917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:31:34.146479 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:31:34.155702 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:31:34.171954 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:31:34.173052 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:31:34.229248 ignition[1327]: INFO : Ignition 2.19.0 Dec 13 01:31:34.229248 ignition[1327]: INFO : Stage: mount Dec 13 01:31:34.233059 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:34.233059 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:34.236183 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:34.236496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:31:34.240315 ignition[1327]: INFO : PUT result: OK Dec 13 01:31:34.245136 ignition[1327]: INFO : mount: mount passed Dec 13 01:31:34.246063 ignition[1327]: INFO : Ignition finished successfully Dec 13 01:31:34.247949 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:31:34.255475 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:31:34.270515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:31:34.291363 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) Dec 13 01:31:34.293546 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:31:34.293638 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:31:34.293653 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:31:34.298357 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:31:34.300577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:31:34.320126 systemd-networkd[1164]: eth0: Gained IPv6LL Dec 13 01:31:34.332261 ignition[1355]: INFO : Ignition 2.19.0 Dec 13 01:31:34.332261 ignition[1355]: INFO : Stage: files Dec 13 01:31:34.334195 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:34.334195 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:34.334195 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:34.338763 ignition[1355]: INFO : PUT result: OK Dec 13 01:31:34.343858 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:31:34.358639 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:31:34.358639 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:31:34.382330 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:31:34.384049 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:31:34.385574 unknown[1355]: wrote ssh authorized keys file for user: core Dec 13 01:31:34.387297 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:31:34.391123 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:31:34.391123 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:31:34.391123 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:31:34.398571 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:31:34.531256 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:31:34.683703 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:31:34.683703 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:31:34.688722 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:31:35.260477 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:31:35.620694 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:31:35.620694 ignition[1355]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:31:35.625349 ignition[1355]: INFO : files: files passed Dec 13 01:31:35.625349 ignition[1355]: INFO : Ignition finished successfully Dec 13 01:31:35.633433 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:31:35.653061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:31:35.658636 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
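The files stage above creates the "core" user with SSH keys, downloads the helm tarball, writes a containerd drop-in and marks prepare-helm.service enabled. A hedged sketch of the kind of Ignition (spec v3) config fragment that would produce those operations; the structure follows the public Ignition v3 schema, but the version number, key placeholder, file mode and unit bodies are assumptions, since the actual config is not shown in the log.

import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}  # placeholder key
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "mode": 0o644,  # serialized as decimal 420 in Ignition JSON
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {"name": "10-use-cgroupfs.conf",
                     "contents": "# drop-in body not shown in the log\n"}
                ],
            },
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "# unit body not shown in the log\n",
            },
        ]
    },
}

print(json.dumps(config, indent=2))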
Dec 13 01:31:35.667696 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:31:35.671951 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:31:35.688092 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:31:35.688092 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:31:35.695435 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:31:35.699317 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:31:35.703077 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:31:35.711081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:31:35.765257 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:31:35.766489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:31:35.771592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:31:35.774813 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:31:35.777123 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:31:35.783497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:31:35.801829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:31:35.808582 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:31:35.836618 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:31:35.839547 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:31:35.841256 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:31:35.843286 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:31:35.844927 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:31:35.849911 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:31:35.850107 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:31:35.854361 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:31:35.856901 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:31:35.859612 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:31:35.862436 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:31:35.864173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:31:35.878423 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:31:35.882365 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:31:35.885806 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:31:35.888629 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:31:35.888767 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:31:35.892431 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:31:35.895000 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 01:31:35.898190 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:31:35.898281 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:31:35.902748 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:31:35.902938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:31:35.907211 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:31:35.907372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:31:35.912493 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:31:35.912715 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:31:35.922574 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:31:35.928707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:31:35.931443 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:31:35.931794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:31:35.936543 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:31:35.936713 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:31:35.951315 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:31:35.954157 ignition[1407]: INFO : Ignition 2.19.0 Dec 13 01:31:35.954157 ignition[1407]: INFO : Stage: umount Dec 13 01:31:35.954157 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:31:35.954157 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:31:35.954157 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:31:35.951473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:31:35.963266 ignition[1407]: INFO : PUT result: OK Dec 13 01:31:35.969721 ignition[1407]: INFO : umount: umount passed Dec 13 01:31:35.971684 ignition[1407]: INFO : Ignition finished successfully Dec 13 01:31:35.972492 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:31:35.972618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:31:35.977730 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:31:35.977917 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:31:35.983772 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:31:35.983861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:31:35.986175 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:31:35.986251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:31:35.993497 systemd[1]: Stopped target network.target - Network. Dec 13 01:31:35.993785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:31:35.993870 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:31:35.994885 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:31:35.995127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:31:35.997739 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:31:35.999320 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 13 01:31:36.001616 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:31:36.002999 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:31:36.003062 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:31:36.006644 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:31:36.006748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:31:36.011562 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:31:36.011657 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:31:36.015912 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:31:36.015977 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:31:36.019839 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:31:36.022470 systemd-networkd[1164]: eth0: DHCPv6 lease lost Dec 13 01:31:36.022944 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:31:36.028150 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:31:36.028833 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:31:36.028933 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:31:36.033803 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:31:36.033913 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:31:36.041444 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:31:36.041538 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:31:36.051848 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:31:36.053824 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:31:36.053897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:31:36.056958 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:31:36.057020 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:31:36.062196 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:31:36.063609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:31:36.069036 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:31:36.069094 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:31:36.074633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:31:36.087932 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:31:36.088078 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:31:36.092114 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:31:36.096109 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:31:36.105952 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:31:36.106085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:31:36.109032 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:31:36.109418 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:31:36.113658 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:31:36.113742 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:31:36.117136 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:31:36.117211 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:31:36.123743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:31:36.123915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:31:36.134626 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:31:36.136344 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:31:36.136422 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:31:36.137970 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:31:36.138035 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:31:36.139513 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:31:36.139569 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:31:36.141051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:31:36.141110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:36.145496 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:31:36.145654 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:31:36.226629 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:31:36.226868 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:31:36.231191 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:31:36.232653 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:31:36.232737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:31:36.247059 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:31:36.260079 systemd[1]: Switching root. Dec 13 01:31:36.292417 systemd-journald[178]: Journal stopped Dec 13 01:31:39.197907 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Dec 13 01:31:39.198002 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:31:39.198026 kernel: SELinux: policy capability open_perms=1 Dec 13 01:31:39.198047 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:31:39.198067 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:31:39.198087 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:31:39.198108 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:31:39.198132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:31:39.198201 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:31:39.198226 kernel: audit: type=1403 audit(1734053497.751:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:31:39.198266 systemd[1]: Successfully loaded SELinux policy in 60.649ms. Dec 13 01:31:39.198295 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.704ms. 
Dec 13 01:31:39.198318 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:31:39.206075 systemd[1]: Detected virtualization amazon. Dec 13 01:31:39.206118 systemd[1]: Detected architecture x86-64. Dec 13 01:31:39.206213 systemd[1]: Detected first boot. Dec 13 01:31:39.206241 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:31:39.206264 zram_generator::config[1466]: No configuration found. Dec 13 01:31:39.206303 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:31:39.206325 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:31:39.206359 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:31:39.206383 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:31:39.206405 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:31:39.206427 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:31:39.206449 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:31:39.206784 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:31:39.212414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:31:39.212447 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:31:39.212477 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:31:39.212499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:31:39.212522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:31:39.212544 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:31:39.212566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:31:39.212589 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:31:39.212612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:31:39.212636 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:31:39.212657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:31:39.212679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:31:39.212700 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:31:39.212722 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:31:39.212744 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:31:39.212766 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:31:39.212787 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:31:39.212813 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:31:39.212834 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
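The systemd 255 banner above encodes compile-time options as "+NAME" / "-NAME" tokens plus key=value settings. A small sketch that splits such a banner into enabled and disabled feature sets; the banner string below is copied from the log and shortened.

banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
          "-GNUTLS +OPENSSL -ACL +BLKID +CURL -FIDO2 +TPM2 +ZSTD "
          "default-hierarchy=unified")

enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
options = {t.split("=")[0]: t.split("=")[1] for t in banner.split() if "=" in t}

print(sorted(enabled))
print(sorted(disabled))
print(options)  # {'default-hierarchy': 'unified'}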
Dec 13 01:31:39.212856 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:31:39.212878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:31:39.212900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:31:39.212924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:31:39.212945 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:31:39.212967 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:31:39.212989 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:31:39.213013 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:31:39.213035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:39.213057 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:31:39.213079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:31:39.213100 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:31:39.213124 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:31:39.213147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:31:39.213168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:31:39.213190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:31:39.213226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:31:39.213246 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:31:39.213267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:31:39.213288 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:31:39.213461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:31:39.213487 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:31:39.213509 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:31:39.213531 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:31:39.213557 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:31:39.213578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:31:39.213599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:31:39.213637 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:31:39.213659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:31:39.213682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:39.213704 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:31:39.213726 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 01:31:39.213747 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:31:39.213773 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:31:39.213794 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:31:39.213816 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:31:39.213839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:31:39.213861 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:31:39.213884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:31:39.213943 systemd-journald[1563]: Collecting audit messages is disabled. Dec 13 01:31:39.213992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:31:39.214014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:31:39.214036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:31:39.214058 kernel: loop: module loaded Dec 13 01:31:39.214080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:31:39.214105 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:31:39.214128 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:31:39.214149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:31:39.214171 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:31:39.214193 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:31:39.214216 systemd-journald[1563]: Journal started Dec 13 01:31:39.214258 systemd-journald[1563]: Runtime Journal (/run/log/journal/ec22cb6ccc3e64aa98c88bbcb885ac12) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:31:39.217365 kernel: fuse: init (API version 7.39) Dec 13 01:31:39.237833 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:31:39.252778 kernel: ACPI: bus type drm_connector registered Dec 13 01:31:39.249682 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:31:39.250035 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:31:39.253984 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:31:39.256790 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:31:39.259542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:31:39.273284 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:31:39.282513 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:31:39.291426 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:31:39.293223 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:31:39.301936 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:31:39.311549 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:31:39.316489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:31:39.333660 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 13 01:31:39.335608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:31:39.344602 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:31:39.355540 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:31:39.369918 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:31:39.372680 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:31:39.374697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:31:39.379857 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:31:39.390440 systemd-journald[1563]: Time spent on flushing to /var/log/journal/ec22cb6ccc3e64aa98c88bbcb885ac12 is 47.092ms for 951 entries. Dec 13 01:31:39.390440 systemd-journald[1563]: System Journal (/var/log/journal/ec22cb6ccc3e64aa98c88bbcb885ac12) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:31:39.445781 systemd-journald[1563]: Received client request to flush runtime journal. Dec 13 01:31:39.390773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:31:39.400528 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:31:39.441195 udevadm[1622]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:31:39.448460 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:31:39.459502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:31:39.462091 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Dec 13 01:31:39.463429 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Dec 13 01:31:39.471578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:31:39.480567 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:31:39.564768 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:31:39.574519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:31:39.595684 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Dec 13 01:31:39.596105 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Dec 13 01:31:39.603179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:31:40.198527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:31:40.213664 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:31:40.244989 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Dec 13 01:31:40.333042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:31:40.348998 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:31:40.376177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:31:40.466315 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:31:40.501770 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:31:40.508272 (udev-worker)[1651]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:31:40.512394 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1652) Dec 13 01:31:40.514895 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 01:31:40.535008 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1652) Dec 13 01:31:40.535077 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:31:40.540358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 01:31:40.546528 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:31:40.546600 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 13 01:31:40.546646 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:31:40.608624 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:31:40.633537 systemd-networkd[1647]: lo: Link UP Dec 13 01:31:40.634713 systemd-networkd[1647]: lo: Gained carrier Dec 13 01:31:40.636483 systemd-networkd[1647]: Enumeration completed Dec 13 01:31:40.638013 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:31:40.638104 systemd-networkd[1647]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:31:40.638628 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:31:40.641137 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:31:40.641189 systemd-networkd[1647]: eth0: Link UP Dec 13 01:31:40.643943 systemd-networkd[1647]: eth0: Gained carrier Dec 13 01:31:40.643990 systemd-networkd[1647]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:31:40.647767 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:31:40.654397 systemd-networkd[1647]: eth0: DHCPv4 address 172.31.29.53/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:31:40.658050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:31:40.719365 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1648) Dec 13 01:31:40.888664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:31:40.939133 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:31:40.940953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:31:40.955679 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:31:40.998344 lvm[1767]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:31:41.031994 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:31:41.034702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:31:41.046578 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:31:41.067014 lvm[1770]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:31:41.108538 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
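The DHCPv4 line above assigns eth0 the address 172.31.29.53/20 with gateway 172.31.16.1. A quick standard-library check that the gateway sits inside the same /20; the address values are taken from the log.

import ipaddress

iface = ipaddress.ip_interface("172.31.29.53/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True
print(iface.network.num_addresses)  # 4096 addresses in the subnet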
Dec 13 01:31:41.111293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:31:41.113249 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:31:41.113282 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:31:41.114503 systemd[1]: Reached target machines.target - Containers. Dec 13 01:31:41.116917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:31:41.123521 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:31:41.126425 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:31:41.129563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:31:41.138536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:31:41.142457 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:31:41.154536 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:31:41.157179 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:31:41.180681 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:31:41.198354 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:31:41.214849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:31:41.215601 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:31:41.336361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:31:41.360358 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:31:41.432365 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:31:41.591777 kernel: loop3: detected capacity change from 0 to 61336 Dec 13 01:31:41.739360 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 01:31:41.784360 kernel: loop5: detected capacity change from 0 to 211296 Dec 13 01:31:41.814370 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 01:31:41.853363 kernel: loop7: detected capacity change from 0 to 61336 Dec 13 01:31:41.870165 (sd-merge)[1792]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:31:41.870795 (sd-merge)[1792]: Merged extensions into '/usr'. Dec 13 01:31:41.876687 systemd[1]: Reloading requested from client PID 1778 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:31:41.876705 systemd[1]: Reloading... Dec 13 01:31:42.000410 zram_generator::config[1820]: No configuration found. Dec 13 01:31:42.203738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:42.336290 systemd[1]: Reloading finished in 458 ms. Dec 13 01:31:42.361196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:31:42.371577 systemd[1]: Starting ensure-sysext.service... 
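The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' extension images onto /usr; the loop device messages correspond to those images being attached. An illustrative listing of the image locations sysext considers; the directory list follows the systemd-sysext search path, and the kubernetes.raw symlink under /etc/extensions matches the link Ignition wrote earlier in this log.

import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    if not os.path.isdir(d):
        continue
    for name in sorted(os.listdir(d)):
        path = os.path.join(d, name)
        if os.path.islink(path):
            print(path, "->", os.path.realpath(path))
        else:
            print(path)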
Dec 13 01:31:42.377540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:31:42.388697 systemd[1]: Reloading requested from client PID 1874 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:31:42.388714 systemd[1]: Reloading... Dec 13 01:31:42.418642 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:31:42.419232 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:31:42.422754 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:31:42.423385 systemd-tmpfiles[1875]: ACLs are not supported, ignoring. Dec 13 01:31:42.423475 systemd-tmpfiles[1875]: ACLs are not supported, ignoring. Dec 13 01:31:42.430056 systemd-tmpfiles[1875]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:31:42.430075 systemd-tmpfiles[1875]: Skipping /boot Dec 13 01:31:42.451880 systemd-tmpfiles[1875]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:31:42.451901 systemd-tmpfiles[1875]: Skipping /boot Dec 13 01:31:42.511561 systemd-networkd[1647]: eth0: Gained IPv6LL Dec 13 01:31:42.558863 zram_generator::config[1905]: No configuration found. Dec 13 01:31:42.730189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:42.805702 systemd[1]: Reloading finished in 416 ms. Dec 13 01:31:42.823165 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:31:42.830206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:31:42.845591 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:42.849509 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:31:42.856026 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:31:42.867654 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:31:42.883646 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:31:42.894798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:42.895104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:31:42.900732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:31:42.915867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:31:42.921995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:31:42.923547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:31:42.923722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:42.935099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:31:42.936867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
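The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean two tmpfiles.d fragments declare the same path, and the later one is skipped. A deliberately simplified scan for such duplicates; only /usr/lib/tmpfiles.d is searched here and override precedence between /etc and /usr is ignored.

import collections
import glob
import shlex

seen = collections.defaultdict(list)
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    for line in open(conf, encoding="utf-8", errors="replace"):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = shlex.split(line)
        if len(fields) >= 2:
            seen[fields[1]].append(conf)  # fields[1] is the path column

for path, sources in seen.items():
    if len(sources) > 1:
        print(f"{path}: declared in {len(sources)} fragments: {sources}")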
Dec 13 01:31:42.946392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:31:42.946624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:31:42.956568 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:31:42.956896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:31:42.963737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:42.964241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:31:42.970150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:31:42.970423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:31:42.970682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:31:42.970882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:42.982469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:42.982949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:31:42.990683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:31:43.016721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:31:43.031807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:31:43.038633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:31:43.040577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:31:43.040867 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:31:43.043086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:31:43.048280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:31:43.054996 ldconfig[1774]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:31:43.064109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:31:43.068781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:31:43.072589 systemd[1]: Finished ensure-sysext.service. Dec 13 01:31:43.074985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:31:43.075231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:31:43.080316 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:31:43.080684 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:31:43.090053 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:31:43.091678 augenrules[2005]: No rules Dec 13 01:31:43.100714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 01:31:43.103163 systemd-resolved[1968]: Positive Trust Anchors: Dec 13 01:31:43.103252 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:31:43.105676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:31:43.106596 systemd-resolved[1968]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:31:43.106731 systemd-resolved[1968]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:31:43.116276 systemd-resolved[1968]: Defaulting to hostname 'linux'. Dec 13 01:31:43.117206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:31:43.117293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:31:43.119963 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:31:43.121655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:31:43.123541 systemd[1]: Reached target network.target - Network. Dec 13 01:31:43.124880 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:31:43.126099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:31:43.131520 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:31:43.146011 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:31:43.187052 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:31:43.189436 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:31:43.189492 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:31:43.190739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:31:43.192090 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:31:43.193938 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:31:43.195435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:31:43.196878 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:31:43.198518 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:31:43.198571 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:31:43.199554 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:31:43.202071 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
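The positive trust anchor logged by systemd-resolved above is the DNS root zone's DS record. A small sketch that splits it into its fields, following the DS resource-record layout (key tag, algorithm, digest type, digest); the record text is copied from the log.

anchor = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rrtype, key_tag, algorithm, digest_type, digest = anchor.split()
print(f"owner={owner} class={klass} type={rrtype}")
print(f"key_tag={key_tag} algorithm={algorithm} digest_type={digest_type}")
print(f"digest={digest[:16]}...")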
Dec 13 01:31:43.205046 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:31:43.209320 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:31:43.212829 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:31:43.214890 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:31:43.216372 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:31:43.217900 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:31:43.217959 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:31:43.217992 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:31:43.222082 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:31:43.228511 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:31:43.238174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:31:43.250529 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:31:43.253413 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:31:43.254589 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:31:43.270452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:43.289686 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:31:43.299534 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:31:43.310870 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:31:43.332017 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:31:43.344245 jq[2033]: false Dec 13 01:31:43.347460 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:31:43.379192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:31:43.386923 extend-filesystems[2034]: Found loop4 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found loop5 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found loop6 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found loop7 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p1 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p2 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p3 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found usr Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p4 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p6 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p7 Dec 13 01:31:43.390108 extend-filesystems[2034]: Found nvme0n1p9 Dec 13 01:31:43.390108 extend-filesystems[2034]: Checking size of /dev/nvme0n1p9 Dec 13 01:31:43.388747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:31:43.435000 extend-filesystems[2034]: Resized partition /dev/nvme0n1p9 Dec 13 01:31:43.450602 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:31:43.452240 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 01:31:43.462797 extend-filesystems[2065]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:31:43.470889 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:31:43.483672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:31:43.489974 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:31:43.489880 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:31:43.488465 dbus-daemon[2031]: [system] SELinux support is enabled Dec 13 01:31:43.506520 dbus-daemon[2031]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1647 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:31:43.522652 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:31:43.523239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:31:43.532948 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:31:43.533631 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:31:43.548785 ntpd[2037]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: ---------------------------------------------------- Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: corporation. Support and training for ntp-4 are Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: available at https://www.nwtime.org/support Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: ---------------------------------------------------- Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: proto: precision = 0.078 usec (-24) Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: basedate set to 2024-11-30 Dec 13 01:31:43.560641 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: gps base set to 2024-12-01 (week 2343) Dec 13 01:31:43.558629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:31:43.548818 ntpd[2037]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:31:43.559068 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
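The kernel line above reports the root ext4 filesystem growing from 553472 to 1489915 blocks of 4 KiB, driven by the resize2fs run that extend-filesystems starts here. Quick arithmetic on what that means in bytes, using the figures from the log.

BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_489_915

old_bytes = old_blocks * BLOCK  # 2,267,021,312 bytes, about 2.1 GiB
new_bytes = new_blocks * BLOCK  # 6,102,691,840 bytes, about 5.7 GiB

print(f"before: {old_bytes / 2**30:.2f} GiB")
print(f"after:  {new_bytes / 2**30:.2f} GiB")
print(f"growth: {(new_bytes - old_bytes) / 2**30:.2f} GiB")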
Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen normally on 3 eth0 172.31.29.53:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen normally on 4 lo [::1]:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listen normally on 5 eth0 [fe80::48e:f9ff:fe27:9891%2]:123 Dec 13 01:31:43.597444 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: Listening on routing socket on fd #22 for interface updates Dec 13 01:31:43.600750 jq[2067]: true Dec 13 01:31:43.548830 ntpd[2037]: ---------------------------------------------------- Dec 13 01:31:43.548841 ntpd[2037]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:31:43.548851 ntpd[2037]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:31:43.548861 ntpd[2037]: corporation. Support and training for ntp-4 are Dec 13 01:31:43.548871 ntpd[2037]: available at https://www.nwtime.org/support Dec 13 01:31:43.548881 ntpd[2037]: ---------------------------------------------------- Dec 13 01:31:43.552804 ntpd[2037]: proto: precision = 0.078 usec (-24) Dec 13 01:31:43.556198 ntpd[2037]: basedate set to 2024-11-30 Dec 13 01:31:43.556217 ntpd[2037]: gps base set to 2024-12-01 (week 2343) Dec 13 01:31:43.578769 ntpd[2037]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:31:43.578825 ntpd[2037]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:31:43.583015 ntpd[2037]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:31:43.583090 ntpd[2037]: Listen normally on 3 eth0 172.31.29.53:123 Dec 13 01:31:43.618958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:31:43.583134 ntpd[2037]: Listen normally on 4 lo [::1]:123 Dec 13 01:31:43.595656 ntpd[2037]: Listen normally on 5 eth0 [fe80::48e:f9ff:fe27:9891%2]:123 Dec 13 01:31:43.595714 ntpd[2037]: Listening on routing socket on fd #22 for interface updates Dec 13 01:31:43.641312 ntpd[2037]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:31:43.644194 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:31:43.644194 ntpd[2037]: 13 Dec 01:31:43 ntpd[2037]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:31:43.643577 ntpd[2037]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:31:43.661494 update_engine[2066]: I20241213 01:31:43.659193 2066 main.cc:92] Flatcar Update Engine starting Dec 13 01:31:43.663213 (ntainerd)[2086]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:31:43.675178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:31:43.675236 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:31:43.676682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:31:43.676708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
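ntpd is now listening on UDP port 123 on lo and eth0. As an illustration of the protocol it serves, here is a minimal SNTP query sketch (assumptions: standard library only, pool.ntp.org as a placeholder server; this is not how the ntpd instance above is configured):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "pool.ntp.org", port: int = 123) -> float:
    # 48-byte client request: LI=0, Version=3, Mode=3 (client)
    request = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (server, port))
        response, _ = sock.recvfrom(48)
    # Transmit timestamp (seconds field) lives in bytes 40..43
    seconds = struct.unpack("!I", response[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
```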
Dec 13 01:31:43.678376 tar[2075]: linux-amd64/helm Dec 13 01:31:43.684217 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:31:43.684431 coreos-metadata[2030]: Dec 13 01:31:43.683 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:31:43.683114 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:31:43.735483 update_engine[2066]: I20241213 01:31:43.703246 2066 update_check_scheduler.cc:74] Next update check in 8m18s Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.685 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.686 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.688 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.688 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.689 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.689 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.689 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.690 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.691 INFO Fetch failed with 404: resource not found Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.691 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.692 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.692 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.693 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.706 INFO Fetch successful Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.706 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:31:43.735557 coreos-metadata[2030]: Dec 13 01:31:43.712 INFO Fetch successful Dec 13 01:31:43.740863 jq[2083]: true Dec 13 01:31:43.741252 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:31:43.751493 extend-filesystems[2065]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:31:43.751493 extend-filesystems[2065]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:31:43.751493 extend-filesystems[2065]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
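The kernel and resize2fs lines above record the root filesystem on /dev/nvme0n1p9 growing from 553472 to 1489915 blocks of 4 KiB. A quick sanity check of what those block counts mean in bytes:

```python
BLOCK_SIZE = 4096          # ext4 block size, reported as "(4k)" in the log
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_489_915

old_bytes = OLD_BLOCKS * BLOCK_SIZE   # ~2.11 GiB before the resize
new_bytes = NEW_BLOCKS * BLOCK_SIZE   # ~5.68 GiB after the resize

print(f"before: {old_bytes / 2**30:.2f} GiB")
print(f"after:  {new_bytes / 2**30:.2f} GiB")
print(f"grown by {(new_bytes - old_bytes) / 2**30:.2f} GiB")
```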
Dec 13 01:31:43.767631 extend-filesystems[2034]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:31:43.766943 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:31:43.767288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:31:43.825907 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2113) Dec 13 01:31:43.838209 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:31:43.854028 systemd-logind[2059]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:31:43.864657 systemd-logind[2059]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 01:31:43.864699 systemd-logind[2059]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:31:43.866831 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:31:43.871546 systemd-logind[2059]: New seat seat0. Dec 13 01:31:43.884655 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:31:43.901646 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:31:43.916772 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:31:43.925895 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:31:43.930609 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:31:43.932939 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:31:44.120407 bash[2196]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:31:44.125537 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:31:44.140715 systemd[1]: Starting sshkeys.service... Dec 13 01:31:44.209207 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:31:44.217949 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:31:44.378989 locksmithd[2147]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:31:44.404144 amazon-ssm-agent[2137]: Initializing new seelog logger Dec 13 01:31:44.404144 amazon-ssm-agent[2137]: New Seelog Logger Creation Complete Dec 13 01:31:44.404144 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.404144 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.404144 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 processing appconfig overrides Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 processing appconfig overrides Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:31:44.407636 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 processing appconfig overrides Dec 13 01:31:44.409056 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO Proxy environment variables: Dec 13 01:31:44.417756 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.417756 amazon-ssm-agent[2137]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:31:44.417756 amazon-ssm-agent[2137]: 2024/12/13 01:31:44 processing appconfig overrides Dec 13 01:31:44.442926 coreos-metadata[2234]: Dec 13 01:31:44.437 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:31:44.440944 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:31:44.440741 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:31:44.462430 coreos-metadata[2234]: Dec 13 01:31:44.459 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:31:44.462430 coreos-metadata[2234]: Dec 13 01:31:44.460 INFO Fetch successful Dec 13 01:31:44.462430 coreos-metadata[2234]: Dec 13 01:31:44.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:31:44.462430 coreos-metadata[2234]: Dec 13 01:31:44.461 INFO Fetch successful Dec 13 01:31:44.459993 dbus-daemon[2031]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2115 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:31:44.468509 unknown[2234]: wrote ssh authorized keys file for user: core Dec 13 01:31:44.494862 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:31:44.519224 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO http_proxy: Dec 13 01:31:44.581587 update-ssh-keys[2261]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:31:44.581249 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:31:44.601191 systemd[1]: Finished sshkeys.service. Dec 13 01:31:44.614086 polkitd[2260]: Started polkitd version 121 Dec 13 01:31:44.621362 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO no_proxy: Dec 13 01:31:44.661464 sshd_keygen[2095]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:31:44.661065 polkitd[2260]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:31:44.661150 polkitd[2260]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:31:44.675017 polkitd[2260]: Finished loading, compiling and executing 2 rules Dec 13 01:31:44.681718 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:31:44.681506 dbus-daemon[2031]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:31:44.689803 polkitd[2260]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:31:44.725451 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO https_proxy: Dec 13 01:31:44.742177 systemd-hostnamed[2115]: Hostname set to (transient) Dec 13 01:31:44.742983 systemd-resolved[1968]: System hostname changed to 'ip-172-31-29-53'. Dec 13 01:31:44.798884 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:31:44.813695 systemd[1]: Starting issuegen.service - Generate /run/issue... 
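Both metadata agents above ("Putting http://169.254.169.254/latest/api/token") use IMDSv2: a PUT to obtain a session token, then GETs carrying that token in a header. A minimal sketch of the same flow (assumptions: standard library only and execution on an EC2 instance; the paths mirror those in the log):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # IMDSv2 step 1: PUT /latest/api/token with a requested session TTL
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # IMDSv2 step 2: present the token on each metadata read
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    print(imds_get("2021-01-03/meta-data/instance-id", token))
```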
Dec 13 01:31:44.828408 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:31:44.847472 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:31:44.847834 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:31:44.861720 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:31:44.886054 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:31:44.901008 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:31:44.916583 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:31:44.918172 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:31:44.934182 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:31:44.970301 containerd[2086]: time="2024-12-13T01:31:44.970206546Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:31:45.033079 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO Agent will take identity from EC2 Dec 13 01:31:45.047251 containerd[2086]: time="2024-12-13T01:31:45.045383347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.047841 containerd[2086]: time="2024-12-13T01:31:45.047796911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:45.047968 containerd[2086]: time="2024-12-13T01:31:45.047950225Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:31:45.048046 containerd[2086]: time="2024-12-13T01:31:45.048032545Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:31:45.048295 containerd[2086]: time="2024-12-13T01:31:45.048272994Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050377557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050513602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050535728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050847871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050870719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050891268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.050908407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.051006537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.051312 containerd[2086]: time="2024-12-13T01:31:45.051270053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:31:45.052422 containerd[2086]: time="2024-12-13T01:31:45.051918311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:31:45.052422 containerd[2086]: time="2024-12-13T01:31:45.051958856Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:31:45.052422 containerd[2086]: time="2024-12-13T01:31:45.052079960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:31:45.052422 containerd[2086]: time="2024-12-13T01:31:45.052137327Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:31:45.065581 containerd[2086]: time="2024-12-13T01:31:45.065529712Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:31:45.065853 containerd[2086]: time="2024-12-13T01:31:45.065771328Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:31:45.065955 containerd[2086]: time="2024-12-13T01:31:45.065939454Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:31:45.066039 containerd[2086]: time="2024-12-13T01:31:45.066025136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:31:45.066148 containerd[2086]: time="2024-12-13T01:31:45.066133304Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:31:45.068050 containerd[2086]: time="2024-12-13T01:31:45.067481755Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:31:45.068386 containerd[2086]: time="2024-12-13T01:31:45.068365525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:31:45.068671 containerd[2086]: time="2024-12-13T01:31:45.068646636Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:31:45.070401 containerd[2086]: time="2024-12-13T01:31:45.070379204Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:31:45.070507 containerd[2086]: time="2024-12-13T01:31:45.070491151Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 01:31:45.070759 containerd[2086]: time="2024-12-13T01:31:45.070737492Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.070861 containerd[2086]: time="2024-12-13T01:31:45.070845488Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.070950 containerd[2086]: time="2024-12-13T01:31:45.070935568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071159565Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071191157Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071227556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071247883Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071268460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071323002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071365317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071389126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071408854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071441366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071462263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071480382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071515223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.071896 containerd[2086]: time="2024-12-13T01:31:45.071534972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071557551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071595052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071613714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071632404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071668433Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071705042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071723356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071754436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071826900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071849635Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:31:45.072477 containerd[2086]: time="2024-12-13T01:31:45.071866487Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:31:45.073275 containerd[2086]: time="2024-12-13T01:31:45.072909989Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:31:45.073275 containerd[2086]: time="2024-12-13T01:31:45.072937143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:31:45.073275 containerd[2086]: time="2024-12-13T01:31:45.072981744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:31:45.073275 containerd[2086]: time="2024-12-13T01:31:45.073000487Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:31:45.073275 containerd[2086]: time="2024-12-13T01:31:45.073021537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:31:45.076518 containerd[2086]: time="2024-12-13T01:31:45.075561492Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:31:45.076518 containerd[2086]: time="2024-12-13T01:31:45.075703116Z" level=info msg="Connect containerd service" Dec 13 01:31:45.076518 containerd[2086]: time="2024-12-13T01:31:45.075778967Z" level=info msg="using legacy CRI server" Dec 13 01:31:45.076518 containerd[2086]: time="2024-12-13T01:31:45.075790914Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:31:45.077070 containerd[2086]: time="2024-12-13T01:31:45.077043314Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.087803505Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 
01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088133222Z" level=info msg="Start subscribing containerd event" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088219135Z" level=info msg="Start recovering state" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088324591Z" level=info msg="Start event monitor" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088368615Z" level=info msg="Start snapshots syncer" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088382819Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:31:45.090957 containerd[2086]: time="2024-12-13T01:31:45.088394031Z" level=info msg="Start streaming server" Dec 13 01:31:45.094178 containerd[2086]: time="2024-12-13T01:31:45.093657345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:31:45.094178 containerd[2086]: time="2024-12-13T01:31:45.093788468Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:31:45.094057 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:31:45.099128 containerd[2086]: time="2024-12-13T01:31:45.098478264Z" level=info msg="containerd successfully booted in 0.130063s" Dec 13 01:31:45.132342 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:31:45.231820 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:31:45.331717 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:31:45.422281 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [Registrar] Starting registrar module Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:45 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:45 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:45 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:31:45.424236 amazon-ssm-agent[2137]: 2024-12-13 01:31:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:31:45.431566 amazon-ssm-agent[2137]: 2024-12-13 01:31:45 INFO [CredentialRefresher] Next credential rotation will be in 31.0166176402 minutes Dec 13 01:31:45.527227 tar[2075]: linux-amd64/LICENSE Dec 13 01:31:45.527227 tar[2075]: linux-amd64/README.md Dec 13 01:31:45.548023 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:31:45.931568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:45.936998 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:31:45.938945 systemd[1]: Startup finished in 9.769s (kernel) + 8.246s (userspace) = 18.015s. 
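containerd reports that it is serving on /run/containerd/containerd.sock and lists which plugins were loaded or skipped. A small sketch for inspecting that state from the host (assumptions: the bundled ctr CLI is on PATH and the caller has permission to reach the socket):

```python
import subprocess

SOCKET = "/run/containerd/containerd.sock"  # address printed in the log above

def ctr(*args: str) -> str:
    cmd = ["ctr", "--address", SOCKET, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(ctr("version"))        # client/server versions (v1.7.21 in this boot)
    print(ctr("plugins", "ls"))  # ok/skipped plugins, matching the boot log
```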
Dec 13 01:31:46.059180 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:46.438979 amazon-ssm-agent[2137]: 2024-12-13 01:31:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:31:46.540249 amazon-ssm-agent[2137]: 2024-12-13 01:31:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2336) started Dec 13 01:31:46.641417 amazon-ssm-agent[2137]: 2024-12-13 01:31:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:31:46.930145 kubelet[2326]: E1213 01:31:46.929848 2326 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:46.933037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:46.933365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:50.844734 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:50.856986 systemd[1]: Started sshd@0-172.31.29.53:22-139.178.68.195:48400.service - OpenSSH per-connection server daemon (139.178.68.195:48400). Dec 13 01:31:51.046387 sshd[2350]: Accepted publickey for core from 139.178.68.195 port 48400 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:51.048892 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:51.059157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:51.065671 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:31:51.081674 systemd-logind[2059]: New session 1 of user core. Dec 13 01:31:51.119620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:31:51.130852 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:51.136253 (systemd)[2356]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:51.267757 systemd[2356]: Queued start job for default target default.target. Dec 13 01:31:51.268245 systemd[2356]: Created slice app.slice - User Application Slice. Dec 13 01:31:51.268275 systemd[2356]: Reached target paths.target - Paths. Dec 13 01:31:51.268293 systemd[2356]: Reached target timers.target - Timers. Dec 13 01:31:51.273981 systemd[2356]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:51.294926 systemd[2356]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:51.295375 systemd[2356]: Reached target sockets.target - Sockets. Dec 13 01:31:51.295448 systemd[2356]: Reached target basic.target - Basic System. Dec 13 01:31:51.295557 systemd[2356]: Reached target default.target - Main User Target. Dec 13 01:31:51.295689 systemd[2356]: Startup finished in 151ms. Dec 13 01:31:51.296402 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:31:51.305820 systemd[1]: Started session-1.scope - Session 1 of User core. 
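kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet (it is typically written later by kubeadm during init/join), so systemd keeps the unit in a restart loop, as the subsequent "Scheduled restart job" entries show. A sketch for watching that loop (assumptions: systemctl is available; the queried names are standard systemd unit properties):

```python
import subprocess

def unit_status(unit: str = "kubelet.service") -> dict:
    props = "Result,ExecMainStatus,NRestarts,ActiveState"
    out = subprocess.run(
        ["systemctl", "show", unit, "--property", props],
        check=True, capture_output=True, text=True,
    ).stdout
    # systemctl show prints one Key=Value pair per line
    return dict(line.split("=", 1) for line in out.strip().splitlines())

if __name__ == "__main__":
    for key, value in unit_status().items():
        print(f"{key}: {value}")
```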
Dec 13 01:31:51.460113 systemd[1]: Started sshd@1-172.31.29.53:22-139.178.68.195:48410.service - OpenSSH per-connection server daemon (139.178.68.195:48410). Dec 13 01:31:51.618096 sshd[2368]: Accepted publickey for core from 139.178.68.195 port 48410 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:51.619634 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:51.625696 systemd-logind[2059]: New session 2 of user core. Dec 13 01:31:51.631988 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:31:51.757844 sshd[2368]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:51.764376 systemd[1]: sshd@1-172.31.29.53:22-139.178.68.195:48410.service: Deactivated successfully. Dec 13 01:31:51.770015 systemd-logind[2059]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:31:51.771095 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:51.772328 systemd-logind[2059]: Removed session 2. Dec 13 01:31:51.789810 systemd[1]: Started sshd@2-172.31.29.53:22-139.178.68.195:48416.service - OpenSSH per-connection server daemon (139.178.68.195:48416). Dec 13 01:31:51.943004 sshd[2376]: Accepted publickey for core from 139.178.68.195 port 48416 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:51.945304 sshd[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:51.951886 systemd-logind[2059]: New session 3 of user core. Dec 13 01:31:51.960681 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:52.076766 sshd[2376]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:52.081665 systemd[1]: sshd@2-172.31.29.53:22-139.178.68.195:48416.service: Deactivated successfully. Dec 13 01:31:52.087641 systemd-logind[2059]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:52.089717 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:31:52.091525 systemd-logind[2059]: Removed session 3. Dec 13 01:31:52.106646 systemd[1]: Started sshd@3-172.31.29.53:22-139.178.68.195:48422.service - OpenSSH per-connection server daemon (139.178.68.195:48422). Dec 13 01:31:52.271170 sshd[2384]: Accepted publickey for core from 139.178.68.195 port 48422 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:52.272863 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:52.280369 systemd-logind[2059]: New session 4 of user core. Dec 13 01:31:52.290684 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:31:52.421675 sshd[2384]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:52.427390 systemd[1]: sshd@3-172.31.29.53:22-139.178.68.195:48422.service: Deactivated successfully. Dec 13 01:31:52.438778 systemd-logind[2059]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:52.440553 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:52.448170 systemd-logind[2059]: Removed session 4. Dec 13 01:31:52.457603 systemd[1]: Started sshd@4-172.31.29.53:22-139.178.68.195:48430.service - OpenSSH per-connection server daemon (139.178.68.195:48430). 
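Each "Accepted publickey ... SHA256:..." line identifies the client key by its OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch that reproduces the same string from a public key file (assumption: a one-line OpenSSH .pub file path is passed as the argument):

```python
import base64
import hashlib
import sys

def ssh_sha256_fingerprint(pubkey_line: str) -> str:
    # A .pub line looks like: "ssh-ed25519 AAAAC3... comment"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        print(ssh_sha256_fingerprint(f.read()))
```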
Dec 13 01:31:52.637450 sshd[2392]: Accepted publickey for core from 139.178.68.195 port 48430 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:52.638973 sshd[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:52.653398 systemd-logind[2059]: New session 5 of user core. Dec 13 01:31:52.661348 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:31:52.838514 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:31:52.838918 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:52.853493 sudo[2396]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:52.876842 sshd[2392]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:52.881979 systemd[1]: sshd@4-172.31.29.53:22-139.178.68.195:48430.service: Deactivated successfully. Dec 13 01:31:52.886691 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:31:52.887674 systemd-logind[2059]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:31:52.888712 systemd-logind[2059]: Removed session 5. Dec 13 01:31:52.904849 systemd[1]: Started sshd@5-172.31.29.53:22-139.178.68.195:48438.service - OpenSSH per-connection server daemon (139.178.68.195:48438). Dec 13 01:31:53.060587 sshd[2401]: Accepted publickey for core from 139.178.68.195 port 48438 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:53.062908 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:53.068280 systemd-logind[2059]: New session 6 of user core. Dec 13 01:31:53.079896 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:31:53.179700 sudo[2406]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:31:53.180141 sudo[2406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:53.184532 sudo[2406]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:53.190058 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:31:53.190530 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:53.204885 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:53.207644 auditctl[2409]: No rules Dec 13 01:31:53.208032 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:31:53.208290 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:53.224025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:31:53.261213 augenrules[2428]: No rules Dec 13 01:31:53.262989 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:31:53.266601 sudo[2405]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:53.291872 sshd[2401]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:53.296990 systemd[1]: sshd@5-172.31.29.53:22-139.178.68.195:48438.service: Deactivated successfully. Dec 13 01:31:53.301046 systemd-logind[2059]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:31:53.302205 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:31:53.304356 systemd-logind[2059]: Removed session 6. 
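The session above removes the default audit rule files and restarts audit-rules, after which auditctl and augenrules both report "No rules". A quick way to confirm what the kernel currently has loaded (assumptions: auditctl is installed and the script runs with root privileges):

```python
import subprocess

def loaded_audit_rules() -> str:
    # `auditctl -l` prints the rules currently loaded in the kernel,
    # or "No rules" when the list is empty, as in the log above.
    return subprocess.run(
        ["auditctl", "-l"], check=True, capture_output=True, text=True
    ).stdout.strip()

if __name__ == "__main__":
    print(loaded_audit_rules() or "No rules")
```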
Dec 13 01:31:53.320800 systemd[1]: Started sshd@6-172.31.29.53:22-139.178.68.195:48454.service - OpenSSH per-connection server daemon (139.178.68.195:48454). Dec 13 01:31:53.475957 sshd[2437]: Accepted publickey for core from 139.178.68.195 port 48454 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:53.477899 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:53.484428 systemd-logind[2059]: New session 7 of user core. Dec 13 01:31:53.496710 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:31:53.599473 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:53.600437 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:54.320898 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:54.333179 (dockerd)[2457]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:55.104422 dockerd[2457]: time="2024-12-13T01:31:55.104354394Z" level=info msg="Starting up" Dec 13 01:31:56.447780 dockerd[2457]: time="2024-12-13T01:31:56.447725566Z" level=info msg="Loading containers: start." Dec 13 01:31:56.700568 kernel: Initializing XFRM netlink socket Dec 13 01:31:56.775963 (udev-worker)[2478]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:31:56.854859 systemd-networkd[1647]: docker0: Link UP Dec 13 01:31:56.889369 dockerd[2457]: time="2024-12-13T01:31:56.889222547Z" level=info msg="Loading containers: done." Dec 13 01:31:56.923279 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2813352221-merged.mount: Deactivated successfully. Dec 13 01:31:56.935921 dockerd[2457]: time="2024-12-13T01:31:56.935862196Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:56.936177 dockerd[2457]: time="2024-12-13T01:31:56.936009488Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:56.936177 dockerd[2457]: time="2024-12-13T01:31:56.936158219Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:57.000613 dockerd[2457]: time="2024-12-13T01:31:56.997960676Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:56.998374 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:31:56.999667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:31:57.008724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:57.907077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
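dockerd finishes with "API listen on /run/docker.sock". The Engine API can be exercised directly over that unix socket; a minimal sketch using only the standard library (assumptions: the default socket path shown above and a caller permitted to open it):

```python
import json
import socket

DOCKER_SOCK = "/run/docker.sock"  # path reported by dockerd above

def docker_version() -> dict:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(DOCKER_SOCK)
        # HTTP/1.0 so the daemon closes the connection after responding
        sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := sock.recv(4096):
            raw += chunk
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info["Version"], info["ApiVersion"])
```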
Dec 13 01:31:57.909883 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:58.003971 kubelet[2610]: E1213 01:31:58.003907 2610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:58.010465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:58.010745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:58.579712 containerd[2086]: time="2024-12-13T01:31:58.579655698Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:31:59.309220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279349146.mount: Deactivated successfully. Dec 13 01:32:02.400304 containerd[2086]: time="2024-12-13T01:32:02.400249999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:02.402458 containerd[2086]: time="2024-12-13T01:32:02.402195523Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:32:02.405137 containerd[2086]: time="2024-12-13T01:32:02.404770050Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:02.409236 containerd[2086]: time="2024-12-13T01:32:02.409192069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:02.410445 containerd[2086]: time="2024-12-13T01:32:02.410403463Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.830688804s" Dec 13 01:32:02.410554 containerd[2086]: time="2024-12-13T01:32:02.410453935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:32:02.442344 containerd[2086]: time="2024-12-13T01:32:02.442288003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:32:05.262403 containerd[2086]: time="2024-12-13T01:32:05.262019285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:05.264959 containerd[2086]: time="2024-12-13T01:32:05.264768167Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:32:05.267608 containerd[2086]: time="2024-12-13T01:32:05.267113249Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:05.271307 
containerd[2086]: time="2024-12-13T01:32:05.271264773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:05.272415 containerd[2086]: time="2024-12-13T01:32:05.272379166Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.830043694s" Dec 13 01:32:05.272574 containerd[2086]: time="2024-12-13T01:32:05.272535755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:32:05.332572 containerd[2086]: time="2024-12-13T01:32:05.332533228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:32:07.095111 containerd[2086]: time="2024-12-13T01:32:07.094987248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:07.097050 containerd[2086]: time="2024-12-13T01:32:07.096994368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:32:07.100019 containerd[2086]: time="2024-12-13T01:32:07.099557101Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:07.106568 containerd[2086]: time="2024-12-13T01:32:07.106524875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:07.107683 containerd[2086]: time="2024-12-13T01:32:07.107643112Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.774866994s" Dec 13 01:32:07.107773 containerd[2086]: time="2024-12-13T01:32:07.107689393Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:32:07.136492 containerd[2086]: time="2024-12-13T01:32:07.136453534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:32:08.243690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:32:08.257677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:08.831938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993235596.mount: Deactivated successfully. Dec 13 01:32:09.496621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
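The pull records above include both the image size and the wall-clock pull time, e.g. kube-apiserver at 35,136,054 bytes in 3.830688804 s and kube-controller-manager at 33,662,844 bytes in 2.830043694 s. A small sketch turning those log figures into effective throughput:

```python
pulls = {
    # image: (size in bytes, pull duration in seconds), copied from the log
    "kube-apiserver:v1.29.12": (35_136_054, 3.830688804),
    "kube-controller-manager:v1.29.12": (33_662_844, 2.830043694),
}

for image, (size, seconds) in pulls.items():
    mib_per_s = size / 2**20 / seconds
    print(f"{image}: {mib_per_s:.1f} MiB/s")
```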
Dec 13 01:32:09.506518 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:09.599797 kubelet[2721]: E1213 01:32:09.599641 2721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:09.604162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:09.604467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:09.775287 containerd[2086]: time="2024-12-13T01:32:09.775153132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.777365 containerd[2086]: time="2024-12-13T01:32:09.777204500Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:32:09.779422 containerd[2086]: time="2024-12-13T01:32:09.779370480Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.782657 containerd[2086]: time="2024-12-13T01:32:09.782595344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.783741 containerd[2086]: time="2024-12-13T01:32:09.783239188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.646742707s" Dec 13 01:32:09.783741 containerd[2086]: time="2024-12-13T01:32:09.783279907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:32:09.812075 containerd[2086]: time="2024-12-13T01:32:09.812025644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:32:10.472203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865207356.mount: Deactivated successfully. 
Dec 13 01:32:11.906024 containerd[2086]: time="2024-12-13T01:32:11.905967058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:11.908060 containerd[2086]: time="2024-12-13T01:32:11.907860077Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:32:11.911367 containerd[2086]: time="2024-12-13T01:32:11.910224287Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:11.914862 containerd[2086]: time="2024-12-13T01:32:11.914807888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:11.916590 containerd[2086]: time="2024-12-13T01:32:11.916158683Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.104089891s" Dec 13 01:32:11.916590 containerd[2086]: time="2024-12-13T01:32:11.916203538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:32:11.943306 containerd[2086]: time="2024-12-13T01:32:11.943268535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:32:12.523959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031689178.mount: Deactivated successfully. 
Dec 13 01:32:12.537597 containerd[2086]: time="2024-12-13T01:32:12.537545373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:12.539749 containerd[2086]: time="2024-12-13T01:32:12.539564594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:32:12.543125 containerd[2086]: time="2024-12-13T01:32:12.541743320Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:12.546053 containerd[2086]: time="2024-12-13T01:32:12.545159992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:12.546053 containerd[2086]: time="2024-12-13T01:32:12.545913109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 602.603753ms" Dec 13 01:32:12.546053 containerd[2086]: time="2024-12-13T01:32:12.545948838Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:32:12.573596 containerd[2086]: time="2024-12-13T01:32:12.573558424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:32:13.201421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166616499.mount: Deactivated successfully. Dec 13 01:32:14.775364 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:32:16.284585 containerd[2086]: time="2024-12-13T01:32:16.284531952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:16.286420 containerd[2086]: time="2024-12-13T01:32:16.286357143Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:32:16.288789 containerd[2086]: time="2024-12-13T01:32:16.288730955Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:16.294278 containerd[2086]: time="2024-12-13T01:32:16.292953604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:16.294278 containerd[2086]: time="2024-12-13T01:32:16.294130996Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.720534051s" Dec 13 01:32:16.294278 containerd[2086]: time="2024-12-13T01:32:16.294168463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:32:19.743388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:32:19.756012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:20.064610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:32:20.064711 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:32:20.065309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:20.077675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:20.113914 systemd[1]: Reloading requested from client PID 2917 ('systemctl') (unit session-7.scope)... Dec 13 01:32:20.113931 systemd[1]: Reloading... Dec 13 01:32:20.234384 zram_generator::config[2957]: No configuration found. Dec 13 01:32:20.394846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:20.491466 systemd[1]: Reloading finished in 376 ms. Dec 13 01:32:20.531909 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:32:20.532069 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:32:20.532478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:20.547601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:21.854611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:21.865903 (kubelet)[3024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:21.947826 kubelet[3024]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:21.948172 kubelet[3024]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:21.948211 kubelet[3024]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:21.960048 kubelet[3024]: I1213 01:32:21.958440 3024 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:22.365459 kubelet[3024]: I1213 01:32:22.365424 3024 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:22.365459 kubelet[3024]: I1213 01:32:22.365457 3024 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:22.365749 kubelet[3024]: I1213 01:32:22.365727 3024 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:22.399934 kubelet[3024]: E1213 01:32:22.399797 3024 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.400363 kubelet[3024]: I1213 01:32:22.400215 3024 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:22.427102 kubelet[3024]: I1213 01:32:22.427075 3024 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:22.427623 kubelet[3024]: I1213 01:32:22.427596 3024 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:22.429169 kubelet[3024]: I1213 01:32:22.429135 3024 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:22.429169 kubelet[3024]: I1213 01:32:22.429173 3024 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:22.429465 kubelet[3024]: I1213 01:32:22.429187 3024 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:22.431505 kubelet[3024]: I1213 01:32:22.431477 3024 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:22.431636 kubelet[3024]: I1213 01:32:22.431618 3024 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:32:22.432113 kubelet[3024]: I1213 01:32:22.431643 3024 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:22.432607 kubelet[3024]: I1213 01:32:22.432587 3024 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:22.434424 kubelet[3024]: I1213 01:32:22.434223 3024 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:22.434424 kubelet[3024]: W1213 01:32:22.434225 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-53&limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.434424 kubelet[3024]: E1213 01:32:22.434295 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-53&limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.437992 kubelet[3024]: W1213 01:32:22.437518 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: 
connect: connection refused Dec 13 01:32:22.437992 kubelet[3024]: E1213 01:32:22.437574 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.438602 kubelet[3024]: I1213 01:32:22.438255 3024 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:22.444621 kubelet[3024]: I1213 01:32:22.444592 3024 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:22.446306 kubelet[3024]: W1213 01:32:22.446267 3024 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:32:22.449652 kubelet[3024]: I1213 01:32:22.449628 3024 server.go:1256] "Started kubelet" Dec 13 01:32:22.449895 kubelet[3024]: I1213 01:32:22.449866 3024 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:22.451606 kubelet[3024]: I1213 01:32:22.450970 3024 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:22.453862 kubelet[3024]: I1213 01:32:22.453832 3024 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:22.455444 kubelet[3024]: I1213 01:32:22.455245 3024 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:22.455519 kubelet[3024]: I1213 01:32:22.455470 3024 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:22.460542 kubelet[3024]: E1213 01:32:22.459270 3024 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.53:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-53.181098826b374e1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-53,UID:ip-172-31-29-53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-53,},FirstTimestamp:2024-12-13 01:32:22.449597979 +0000 UTC m=+0.577223518,LastTimestamp:2024-12-13 01:32:22.449597979 +0000 UTC m=+0.577223518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-53,}" Dec 13 01:32:22.466378 kubelet[3024]: I1213 01:32:22.465744 3024 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:22.468058 kubelet[3024]: E1213 01:32:22.468036 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-53?timeout=10s\": dial tcp 172.31.29.53:6443: connect: connection refused" interval="200ms" Dec 13 01:32:22.468323 kubelet[3024]: I1213 01:32:22.468309 3024 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:22.468509 kubelet[3024]: I1213 01:32:22.468491 3024 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:22.471265 kubelet[3024]: I1213 01:32:22.471238 3024 
desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:32:22.471482 kubelet[3024]: I1213 01:32:22.471469 3024 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:22.471910 kubelet[3024]: I1213 01:32:22.471893 3024 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:22.485058 kubelet[3024]: I1213 01:32:22.484492 3024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:22.485572 kubelet[3024]: E1213 01:32:22.485549 3024 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:22.487876 kubelet[3024]: I1213 01:32:22.487767 3024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:22.487876 kubelet[3024]: I1213 01:32:22.487808 3024 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:22.487876 kubelet[3024]: I1213 01:32:22.487834 3024 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:22.488082 kubelet[3024]: E1213 01:32:22.487890 3024 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:22.493951 kubelet[3024]: W1213 01:32:22.493895 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.494267 kubelet[3024]: E1213 01:32:22.494148 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.499081 kubelet[3024]: W1213 01:32:22.498970 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.499081 kubelet[3024]: E1213 01:32:22.499054 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:22.539052 kubelet[3024]: I1213 01:32:22.538799 3024 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:22.539052 kubelet[3024]: I1213 01:32:22.538820 3024 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:22.539052 kubelet[3024]: I1213 01:32:22.538847 3024 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:22.553041 kubelet[3024]: I1213 01:32:22.552917 3024 policy_none.go:49] "None policy: Start" Dec 13 01:32:22.553747 kubelet[3024]: I1213 01:32:22.553684 3024 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:22.553747 kubelet[3024]: I1213 01:32:22.553716 3024 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:22.563654 kubelet[3024]: I1213 01:32:22.563617 3024 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Dec 13 01:32:22.564835 kubelet[3024]: I1213 01:32:22.563982 3024 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:22.572596 kubelet[3024]: I1213 01:32:22.572552 3024 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 01:32:22.573301 kubelet[3024]: E1213 01:32:22.573270 3024 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.53:6443/api/v1/nodes\": dial tcp 172.31.29.53:6443: connect: connection refused" node="ip-172-31-29-53" Dec 13 01:32:22.573301 kubelet[3024]: E1213 01:32:22.573153 3024 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-53\" not found" Dec 13 01:32:22.588607 kubelet[3024]: I1213 01:32:22.588563 3024 topology_manager.go:215] "Topology Admit Handler" podUID="3ef7e5cb4e6bf3c695b9076303a823d1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-53" Dec 13 01:32:22.590174 kubelet[3024]: I1213 01:32:22.590144 3024 topology_manager.go:215] "Topology Admit Handler" podUID="32058be7f6f951ca350c6bbd7413eb77" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.593481 kubelet[3024]: I1213 01:32:22.591560 3024 topology_manager.go:215] "Topology Admit Handler" podUID="0b4229c660671d93e702f52707829795" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-53" Dec 13 01:32:22.668892 kubelet[3024]: E1213 01:32:22.668775 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-53?timeout=10s\": dial tcp 172.31.29.53:6443: connect: connection refused" interval="400ms" Dec 13 01:32:22.773080 kubelet[3024]: I1213 01:32:22.773032 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b4229c660671d93e702f52707829795-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-53\" (UID: \"0b4229c660671d93e702f52707829795\") " pod="kube-system/kube-scheduler-ip-172-31-29-53" Dec 13 01:32:22.773080 kubelet[3024]: I1213 01:32:22.773086 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:22.773356 kubelet[3024]: I1213 01:32:22.773117 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.773356 kubelet[3024]: I1213 01:32:22.773143 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.773356 kubelet[3024]: I1213 01:32:22.773170 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.773356 kubelet[3024]: I1213 01:32:22.773194 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-ca-certs\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:22.773356 kubelet[3024]: I1213 01:32:22.773224 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:22.773555 kubelet[3024]: I1213 01:32:22.773252 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.773555 kubelet[3024]: I1213 01:32:22.773283 3024 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:22.776721 kubelet[3024]: I1213 01:32:22.776439 3024 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 01:32:22.776830 kubelet[3024]: E1213 01:32:22.776791 3024 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.53:6443/api/v1/nodes\": dial tcp 172.31.29.53:6443: connect: connection refused" node="ip-172-31-29-53" Dec 13 01:32:22.898372 containerd[2086]: time="2024-12-13T01:32:22.898320368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-53,Uid:3ef7e5cb4e6bf3c695b9076303a823d1,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:22.902713 containerd[2086]: time="2024-12-13T01:32:22.901740029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-53,Uid:0b4229c660671d93e702f52707829795,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:22.904950 containerd[2086]: time="2024-12-13T01:32:22.904915608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-53,Uid:32058be7f6f951ca350c6bbd7413eb77,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:23.075380 kubelet[3024]: E1213 01:32:23.073747 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-53?timeout=10s\": dial tcp 172.31.29.53:6443: connect: connection refused" interval="800ms" Dec 13 01:32:23.179615 kubelet[3024]: I1213 01:32:23.179582 3024 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 
01:32:23.180256 kubelet[3024]: E1213 01:32:23.180176 3024 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.53:6443/api/v1/nodes\": dial tcp 172.31.29.53:6443: connect: connection refused" node="ip-172-31-29-53" Dec 13 01:32:23.481519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499123508.mount: Deactivated successfully. Dec 13 01:32:23.499472 containerd[2086]: time="2024-12-13T01:32:23.499417765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:23.501367 containerd[2086]: time="2024-12-13T01:32:23.501306311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:32:23.503468 containerd[2086]: time="2024-12-13T01:32:23.503431082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:23.505562 containerd[2086]: time="2024-12-13T01:32:23.505527944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:23.507835 containerd[2086]: time="2024-12-13T01:32:23.507793769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:23.509832 containerd[2086]: time="2024-12-13T01:32:23.509785257Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:23.511503 containerd[2086]: time="2024-12-13T01:32:23.511205686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:23.514968 containerd[2086]: time="2024-12-13T01:32:23.514933505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:23.515862 containerd[2086]: time="2024-12-13T01:32:23.515827013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 617.409897ms" Dec 13 01:32:23.522079 containerd[2086]: time="2024-12-13T01:32:23.522022460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.935092ms" Dec 13 01:32:23.523017 containerd[2086]: time="2024-12-13T01:32:23.522977856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.163953ms" Dec 13 01:32:23.604919 kubelet[3024]: W1213 01:32:23.604856 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.604919 kubelet[3024]: E1213 01:32:23.604919 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.668582 kubelet[3024]: W1213 01:32:23.668524 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.668582 kubelet[3024]: E1213 01:32:23.668586 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.671043 kubelet[3024]: W1213 01:32:23.670995 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-53&limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.671043 kubelet[3024]: E1213 01:32:23.671050 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-53&limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.790028 kubelet[3024]: W1213 01:32:23.789846 3024 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.790028 kubelet[3024]: E1213 01:32:23.790031 3024 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:23.869475 containerd[2086]: time="2024-12-13T01:32:23.868318375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:23.869475 containerd[2086]: time="2024-12-13T01:32:23.868412748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:23.869475 containerd[2086]: time="2024-12-13T01:32:23.868436813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.869475 containerd[2086]: time="2024-12-13T01:32:23.868541891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.878461 containerd[2086]: time="2024-12-13T01:32:23.873680170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:23.878461 containerd[2086]: time="2024-12-13T01:32:23.874695918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:23.878461 containerd[2086]: time="2024-12-13T01:32:23.874837104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.878461 containerd[2086]: time="2024-12-13T01:32:23.875226810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.878751 kubelet[3024]: E1213 01:32:23.876663 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-53?timeout=10s\": dial tcp 172.31.29.53:6443: connect: connection refused" interval="1.6s" Dec 13 01:32:23.889445 containerd[2086]: time="2024-12-13T01:32:23.889083250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:23.889445 containerd[2086]: time="2024-12-13T01:32:23.889158155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:23.889445 containerd[2086]: time="2024-12-13T01:32:23.889194296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.889798 containerd[2086]: time="2024-12-13T01:32:23.889378760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.985487 kubelet[3024]: I1213 01:32:23.985462 3024 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 01:32:23.986798 kubelet[3024]: E1213 01:32:23.986774 3024 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.53:6443/api/v1/nodes\": dial tcp 172.31.29.53:6443: connect: connection refused" node="ip-172-31-29-53" Dec 13 01:32:24.050884 containerd[2086]: time="2024-12-13T01:32:24.048302261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-53,Uid:3ef7e5cb4e6bf3c695b9076303a823d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aeb71128c3cce46a24a8d1f26542c5b57e656c8ed99624e81f072f883a98a4c\"" Dec 13 01:32:24.058098 containerd[2086]: time="2024-12-13T01:32:24.058021388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-53,Uid:32058be7f6f951ca350c6bbd7413eb77,Namespace:kube-system,Attempt:0,} returns sandbox id \"39532486831f5d23cf98d4e91b6f5b131122c41932a47cf1e4acc8a0d875c0e8\"" Dec 13 01:32:24.066861 containerd[2086]: time="2024-12-13T01:32:24.066728228Z" level=info msg="CreateContainer within sandbox \"1aeb71128c3cce46a24a8d1f26542c5b57e656c8ed99624e81f072f883a98a4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:32:24.067232 containerd[2086]: time="2024-12-13T01:32:24.067196716Z" level=info msg="CreateContainer within sandbox \"39532486831f5d23cf98d4e91b6f5b131122c41932a47cf1e4acc8a0d875c0e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:32:24.071866 containerd[2086]: time="2024-12-13T01:32:24.071822108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-53,Uid:0b4229c660671d93e702f52707829795,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd645dbbeab9c04065596e4a1b15691f3f2ca2ea4d9823aa3dec0e946dbd7c61\"" Dec 13 01:32:24.075199 containerd[2086]: time="2024-12-13T01:32:24.075159039Z" level=info msg="CreateContainer within sandbox \"bd645dbbeab9c04065596e4a1b15691f3f2ca2ea4d9823aa3dec0e946dbd7c61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:32:24.131814 containerd[2086]: time="2024-12-13T01:32:24.131758883Z" level=info msg="CreateContainer within sandbox \"1aeb71128c3cce46a24a8d1f26542c5b57e656c8ed99624e81f072f883a98a4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"895261e97e165bc1e4e858218c294bc5efa3ea7e780487a82223e11060def0f6\"" Dec 13 01:32:24.132599 containerd[2086]: time="2024-12-13T01:32:24.132570500Z" level=info msg="StartContainer for \"895261e97e165bc1e4e858218c294bc5efa3ea7e780487a82223e11060def0f6\"" Dec 13 01:32:24.141150 containerd[2086]: time="2024-12-13T01:32:24.141028057Z" level=info msg="CreateContainer within sandbox \"39532486831f5d23cf98d4e91b6f5b131122c41932a47cf1e4acc8a0d875c0e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab\"" Dec 13 01:32:24.143605 containerd[2086]: time="2024-12-13T01:32:24.141739543Z" level=info msg="StartContainer for \"998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab\"" Dec 13 01:32:24.147511 containerd[2086]: time="2024-12-13T01:32:24.147439504Z" level=info msg="CreateContainer within sandbox \"bd645dbbeab9c04065596e4a1b15691f3f2ca2ea4d9823aa3dec0e946dbd7c61\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827\"" Dec 13 01:32:24.149235 containerd[2086]: time="2024-12-13T01:32:24.149208354Z" level=info msg="StartContainer for \"336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827\"" Dec 13 01:32:24.347693 containerd[2086]: time="2024-12-13T01:32:24.347563893Z" level=info msg="StartContainer for \"998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab\" returns successfully" Dec 13 01:32:24.352242 containerd[2086]: time="2024-12-13T01:32:24.351456333Z" level=info msg="StartContainer for \"895261e97e165bc1e4e858218c294bc5efa3ea7e780487a82223e11060def0f6\" returns successfully" Dec 13 01:32:24.372855 containerd[2086]: time="2024-12-13T01:32:24.372816035Z" level=info msg="StartContainer for \"336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827\" returns successfully" Dec 13 01:32:24.556401 kubelet[3024]: E1213 01:32:24.556360 3024 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.53:6443: connect: connection refused Dec 13 01:32:25.592812 kubelet[3024]: I1213 01:32:25.592782 3024 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 01:32:27.350680 kubelet[3024]: E1213 01:32:27.350635 3024 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-53\" not found" node="ip-172-31-29-53" Dec 13 01:32:27.418446 kubelet[3024]: I1213 01:32:27.418384 3024 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-53" Dec 13 01:32:27.441136 kubelet[3024]: I1213 01:32:27.441100 3024 apiserver.go:52] "Watching apiserver" Dec 13 01:32:27.471475 kubelet[3024]: I1213 01:32:27.471421 3024 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:27.747600 kubelet[3024]: E1213 01:32:27.747570 3024 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:28.945268 update_engine[2066]: I20241213 01:32:28.942277 2066 update_attempter.cc:509] Updating boot flags... Dec 13 01:32:29.042369 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3309) Dec 13 01:32:30.539922 systemd[1]: Reloading requested from client PID 3393 ('systemctl') (unit session-7.scope)... Dec 13 01:32:30.539941 systemd[1]: Reloading... Dec 13 01:32:30.686363 zram_generator::config[3433]: No configuration found. Dec 13 01:32:30.870079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:30.992451 systemd[1]: Reloading finished in 451 ms. Dec 13 01:32:31.050794 kubelet[3024]: I1213 01:32:31.050716 3024 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:31.050767 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:31.063440 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 13 01:32:31.063926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:31.072438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:31.484649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:31.494094 (kubelet)[3500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:31.585848 kubelet[3500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:31.585848 kubelet[3500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:31.585848 kubelet[3500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:31.586556 kubelet[3500]: I1213 01:32:31.585935 3500 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:31.592856 kubelet[3500]: I1213 01:32:31.592820 3500 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:31.592856 kubelet[3500]: I1213 01:32:31.592847 3500 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:31.593104 kubelet[3500]: I1213 01:32:31.593083 3500 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:31.595355 kubelet[3500]: I1213 01:32:31.595296 3500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:31.598376 kubelet[3500]: I1213 01:32:31.598069 3500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:31.620006 kubelet[3500]: I1213 01:32:31.619614 3500 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:31.623128 kubelet[3500]: I1213 01:32:31.623107 3500 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:31.623595 kubelet[3500]: I1213 01:32:31.623565 3500 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:31.623964 kubelet[3500]: I1213 01:32:31.623756 3500 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:31.623964 kubelet[3500]: I1213 01:32:31.623776 3500 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:31.623964 kubelet[3500]: I1213 01:32:31.623819 3500 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:31.624611 kubelet[3500]: I1213 01:32:31.624162 3500 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:32:31.624959 kubelet[3500]: I1213 01:32:31.624947 3500 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:31.625079 kubelet[3500]: I1213 01:32:31.625069 3500 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:31.627387 kubelet[3500]: I1213 01:32:31.627370 3500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:31.628631 kubelet[3500]: I1213 01:32:31.628617 3500 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:31.629211 kubelet[3500]: I1213 01:32:31.629022 3500 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:31.631481 kubelet[3500]: I1213 01:32:31.631467 3500 server.go:1256] "Started kubelet" Dec 13 01:32:31.637160 kubelet[3500]: I1213 01:32:31.637030 3500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:31.654377 kubelet[3500]: I1213 01:32:31.651699 3500 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:31.657145 kubelet[3500]: I1213 01:32:31.654822 3500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:31.659505 kubelet[3500]: I1213 01:32:31.659482 3500 server.go:233] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:31.666045 kubelet[3500]: I1213 01:32:31.665997 3500 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:31.668973 kubelet[3500]: I1213 01:32:31.668950 3500 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:31.671364 kubelet[3500]: I1213 01:32:31.671316 3500 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:32:31.671494 kubelet[3500]: I1213 01:32:31.671481 3500 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:31.681023 kubelet[3500]: I1213 01:32:31.680967 3500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:31.682571 kubelet[3500]: I1213 01:32:31.682550 3500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:31.682717 kubelet[3500]: I1213 01:32:31.682709 3500 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:31.682790 kubelet[3500]: I1213 01:32:31.682784 3500 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:31.682900 kubelet[3500]: E1213 01:32:31.682891 3500 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:31.699318 kubelet[3500]: I1213 01:32:31.699284 3500 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:31.701673 kubelet[3500]: I1213 01:32:31.701615 3500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:31.703151 kubelet[3500]: E1213 01:32:31.703129 3500 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:31.706189 kubelet[3500]: I1213 01:32:31.705805 3500 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:31.774882 kubelet[3500]: I1213 01:32:31.774858 3500 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-53" Dec 13 01:32:31.786383 kubelet[3500]: E1213 01:32:31.785749 3500 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:32:31.788532 kubelet[3500]: I1213 01:32:31.788509 3500 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-53" Dec 13 01:32:31.788657 kubelet[3500]: I1213 01:32:31.788585 3500 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-53" Dec 13 01:32:31.798154 kubelet[3500]: I1213 01:32:31.798132 3500 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:31.798304 kubelet[3500]: I1213 01:32:31.798295 3500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:31.798427 kubelet[3500]: I1213 01:32:31.798418 3500 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:31.798764 kubelet[3500]: I1213 01:32:31.798752 3500 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:31.798938 kubelet[3500]: I1213 01:32:31.798931 3500 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:31.799013 kubelet[3500]: I1213 01:32:31.799006 3500 policy_none.go:49] "None policy: Start" Dec 13 01:32:31.800230 kubelet[3500]: I1213 01:32:31.800135 3500 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:31.800230 kubelet[3500]: I1213 01:32:31.800173 3500 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:31.800760 kubelet[3500]: I1213 01:32:31.800657 3500 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:31.804471 kubelet[3500]: I1213 01:32:31.804445 3500 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:31.806206 kubelet[3500]: I1213 01:32:31.804716 3500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:31.987001 kubelet[3500]: I1213 01:32:31.986948 3500 topology_manager.go:215] "Topology Admit Handler" podUID="3ef7e5cb4e6bf3c695b9076303a823d1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-53" Dec 13 01:32:31.987168 kubelet[3500]: I1213 01:32:31.987064 3500 topology_manager.go:215] "Topology Admit Handler" podUID="32058be7f6f951ca350c6bbd7413eb77" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:31.987168 kubelet[3500]: I1213 01:32:31.987109 3500 topology_manager.go:215] "Topology Admit Handler" podUID="0b4229c660671d93e702f52707829795" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-53" Dec 13 01:32:32.000479 kubelet[3500]: E1213 01:32:31.999360 3500 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-53\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-53" Dec 13 01:32:32.000479 kubelet[3500]: E1213 01:32:32.000087 3500 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-53\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:32.073141 kubelet[3500]: I1213 01:32:32.072914 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b4229c660671d93e702f52707829795-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-53\" (UID: \"0b4229c660671d93e702f52707829795\") " pod="kube-system/kube-scheduler-ip-172-31-29-53" Dec 13 01:32:32.073141 kubelet[3500]: I1213 01:32:32.072977 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:32.073141 kubelet[3500]: I1213 01:32:32.073017 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:32.073141 kubelet[3500]: I1213 01:32:32.073050 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:32.073141 kubelet[3500]: I1213 01:32:32.073079 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-ca-certs\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:32.073680 kubelet[3500]: I1213 01:32:32.073108 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:32.073680 kubelet[3500]: I1213 01:32:32.073147 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef7e5cb4e6bf3c695b9076303a823d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-53\" (UID: \"3ef7e5cb4e6bf3c695b9076303a823d1\") " pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:32.073680 kubelet[3500]: I1213 01:32:32.073270 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 01:32:32.073972 kubelet[3500]: I1213 01:32:32.073913 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32058be7f6f951ca350c6bbd7413eb77-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-53\" (UID: \"32058be7f6f951ca350c6bbd7413eb77\") " pod="kube-system/kube-controller-manager-ip-172-31-29-53" Dec 13 
01:32:32.627857 kubelet[3500]: I1213 01:32:32.627818 3500 apiserver.go:52] "Watching apiserver" Dec 13 01:32:32.672647 kubelet[3500]: I1213 01:32:32.672441 3500 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:32.770589 kubelet[3500]: E1213 01:32:32.768440 3500 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-53\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-53" Dec 13 01:32:32.830408 kubelet[3500]: I1213 01:32:32.830354 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-53" podStartSLOduration=3.83027726 podStartE2EDuration="3.83027726s" podCreationTimestamp="2024-12-13 01:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:32.82825649 +0000 UTC m=+1.327850311" watchObservedRunningTime="2024-12-13 01:32:32.83027726 +0000 UTC m=+1.329871061" Dec 13 01:32:32.887314 kubelet[3500]: I1213 01:32:32.887173 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-53" podStartSLOduration=1.887117495 podStartE2EDuration="1.887117495s" podCreationTimestamp="2024-12-13 01:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:32.858437761 +0000 UTC m=+1.358031582" watchObservedRunningTime="2024-12-13 01:32:32.887117495 +0000 UTC m=+1.386711303" Dec 13 01:32:32.916150 kubelet[3500]: I1213 01:32:32.915815 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-53" podStartSLOduration=2.915762397 podStartE2EDuration="2.915762397s" podCreationTimestamp="2024-12-13 01:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:32.888642489 +0000 UTC m=+1.388236308" watchObservedRunningTime="2024-12-13 01:32:32.915762397 +0000 UTC m=+1.415356217" Dec 13 01:32:37.786823 sudo[2441]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:37.810838 sshd[2437]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:37.814638 systemd[1]: sshd@6-172.31.29.53:22-139.178.68.195:48454.service: Deactivated successfully. Dec 13 01:32:37.822082 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:32:37.822296 systemd-logind[2059]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:32:37.826728 systemd-logind[2059]: Removed session 7. Dec 13 01:32:42.603373 kubelet[3500]: I1213 01:32:42.601233 3500 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:42.614698 containerd[2086]: time="2024-12-13T01:32:42.607486277Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:32:42.615737 kubelet[3500]: I1213 01:32:42.607762 3500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:43.618669 kubelet[3500]: I1213 01:32:43.618625 3500 topology_manager.go:215] "Topology Admit Handler" podUID="7759c01b-b3b5-470d-99ec-c0f0cb4caa3e" podNamespace="kube-system" podName="kube-proxy-sqkzz" Dec 13 01:32:43.758535 kubelet[3500]: I1213 01:32:43.756320 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57vp\" (UniqueName: \"kubernetes.io/projected/7759c01b-b3b5-470d-99ec-c0f0cb4caa3e-kube-api-access-t57vp\") pod \"kube-proxy-sqkzz\" (UID: \"7759c01b-b3b5-470d-99ec-c0f0cb4caa3e\") " pod="kube-system/kube-proxy-sqkzz" Dec 13 01:32:43.758535 kubelet[3500]: I1213 01:32:43.756392 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7759c01b-b3b5-470d-99ec-c0f0cb4caa3e-kube-proxy\") pod \"kube-proxy-sqkzz\" (UID: \"7759c01b-b3b5-470d-99ec-c0f0cb4caa3e\") " pod="kube-system/kube-proxy-sqkzz" Dec 13 01:32:43.758535 kubelet[3500]: I1213 01:32:43.756422 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7759c01b-b3b5-470d-99ec-c0f0cb4caa3e-xtables-lock\") pod \"kube-proxy-sqkzz\" (UID: \"7759c01b-b3b5-470d-99ec-c0f0cb4caa3e\") " pod="kube-system/kube-proxy-sqkzz" Dec 13 01:32:43.758535 kubelet[3500]: I1213 01:32:43.756457 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7759c01b-b3b5-470d-99ec-c0f0cb4caa3e-lib-modules\") pod \"kube-proxy-sqkzz\" (UID: \"7759c01b-b3b5-470d-99ec-c0f0cb4caa3e\") " pod="kube-system/kube-proxy-sqkzz" Dec 13 01:32:43.758535 kubelet[3500]: I1213 01:32:43.756852 3500 topology_manager.go:215] "Topology Admit Handler" podUID="1661031a-28d2-4368-9aa4-6c3aa5585b05" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-wnlk6" Dec 13 01:32:43.856704 kubelet[3500]: I1213 01:32:43.856643 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1661031a-28d2-4368-9aa4-6c3aa5585b05-var-lib-calico\") pod \"tigera-operator-c7ccbd65-wnlk6\" (UID: \"1661031a-28d2-4368-9aa4-6c3aa5585b05\") " pod="tigera-operator/tigera-operator-c7ccbd65-wnlk6" Dec 13 01:32:43.856857 kubelet[3500]: I1213 01:32:43.856787 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnx9\" (UniqueName: \"kubernetes.io/projected/1661031a-28d2-4368-9aa4-6c3aa5585b05-kube-api-access-qmnx9\") pod \"tigera-operator-c7ccbd65-wnlk6\" (UID: \"1661031a-28d2-4368-9aa4-6c3aa5585b05\") " pod="tigera-operator/tigera-operator-c7ccbd65-wnlk6" Dec 13 01:32:43.941691 containerd[2086]: time="2024-12-13T01:32:43.938298712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqkzz,Uid:7759c01b-b3b5-470d-99ec-c0f0cb4caa3e,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:44.008146 containerd[2086]: time="2024-12-13T01:32:44.008028546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:44.008146 containerd[2086]: time="2024-12-13T01:32:44.008081896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:44.008146 containerd[2086]: time="2024-12-13T01:32:44.008097300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:44.011391 containerd[2086]: time="2024-12-13T01:32:44.008329278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:44.062718 containerd[2086]: time="2024-12-13T01:32:44.062679158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqkzz,Uid:7759c01b-b3b5-470d-99ec-c0f0cb4caa3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"34bde627ae5cd4056275b8d19165cb3a8e9c99e6eb113ce4cf69847c96cfa24a\"" Dec 13 01:32:44.066451 containerd[2086]: time="2024-12-13T01:32:44.066397797Z" level=info msg="CreateContainer within sandbox \"34bde627ae5cd4056275b8d19165cb3a8e9c99e6eb113ce4cf69847c96cfa24a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:44.075940 containerd[2086]: time="2024-12-13T01:32:44.075906082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wnlk6,Uid:1661031a-28d2-4368-9aa4-6c3aa5585b05,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:32:44.133229 containerd[2086]: time="2024-12-13T01:32:44.133177294Z" level=info msg="CreateContainer within sandbox \"34bde627ae5cd4056275b8d19165cb3a8e9c99e6eb113ce4cf69847c96cfa24a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac1c78e0c270aee036f6cc8fb43cd1048629ac81cbd1a0e7924387ab58b09619\"" Dec 13 01:32:44.135381 containerd[2086]: time="2024-12-13T01:32:44.134082424Z" level=info msg="StartContainer for \"ac1c78e0c270aee036f6cc8fb43cd1048629ac81cbd1a0e7924387ab58b09619\"" Dec 13 01:32:44.146544 containerd[2086]: time="2024-12-13T01:32:44.146257204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:44.148401 containerd[2086]: time="2024-12-13T01:32:44.146383337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:44.148401 containerd[2086]: time="2024-12-13T01:32:44.147200506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:44.148401 containerd[2086]: time="2024-12-13T01:32:44.147344257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:44.230112 containerd[2086]: time="2024-12-13T01:32:44.230006151Z" level=info msg="StartContainer for \"ac1c78e0c270aee036f6cc8fb43cd1048629ac81cbd1a0e7924387ab58b09619\" returns successfully" Dec 13 01:32:44.277217 containerd[2086]: time="2024-12-13T01:32:44.277179099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-wnlk6,Uid:1661031a-28d2-4368-9aa4-6c3aa5585b05,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d927f4b1e246e9a38f0b18fe7ad0098462fe3f4a730bd3f9f99354463cbb7b4b\"" Dec 13 01:32:44.287584 containerd[2086]: time="2024-12-13T01:32:44.287547331Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:32:47.189160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484811562.mount: Deactivated successfully. Dec 13 01:32:48.034736 containerd[2086]: time="2024-12-13T01:32:48.034687320Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:48.036704 containerd[2086]: time="2024-12-13T01:32:48.036563891Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764261" Dec 13 01:32:48.041686 containerd[2086]: time="2024-12-13T01:32:48.041352482Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:48.043898 containerd[2086]: time="2024-12-13T01:32:48.043845873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:48.045246 containerd[2086]: time="2024-12-13T01:32:48.044608983Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.756837247s" Dec 13 01:32:48.045246 containerd[2086]: time="2024-12-13T01:32:48.044759698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:32:48.052169 containerd[2086]: time="2024-12-13T01:32:48.052126167Z" level=info msg="CreateContainer within sandbox \"d927f4b1e246e9a38f0b18fe7ad0098462fe3f4a730bd3f9f99354463cbb7b4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:32:48.164765 containerd[2086]: time="2024-12-13T01:32:48.164718724Z" level=info msg="CreateContainer within sandbox \"d927f4b1e246e9a38f0b18fe7ad0098462fe3f4a730bd3f9f99354463cbb7b4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef\"" Dec 13 01:32:48.166270 containerd[2086]: time="2024-12-13T01:32:48.165393041Z" level=info msg="StartContainer for \"e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef\"" Dec 13 01:32:48.257834 containerd[2086]: time="2024-12-13T01:32:48.257666132Z" level=info msg="StartContainer for \"e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef\" returns successfully" Dec 13 01:32:48.812732 kubelet[3500]: I1213 01:32:48.811020 3500 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sqkzz" podStartSLOduration=5.810969536 podStartE2EDuration="5.810969536s" podCreationTimestamp="2024-12-13 01:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:44.810071736 +0000 UTC m=+13.309665558" watchObservedRunningTime="2024-12-13 01:32:48.810969536 +0000 UTC m=+17.310563363" Dec 13 01:32:51.836525 kubelet[3500]: I1213 01:32:51.833950 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-wnlk6" podStartSLOduration=5.070472112 podStartE2EDuration="8.83364334s" podCreationTimestamp="2024-12-13 01:32:43 +0000 UTC" firstStartedPulling="2024-12-13 01:32:44.286296097 +0000 UTC m=+12.785889906" lastFinishedPulling="2024-12-13 01:32:48.049467325 +0000 UTC m=+16.549061134" observedRunningTime="2024-12-13 01:32:48.81129484 +0000 UTC m=+17.310888682" watchObservedRunningTime="2024-12-13 01:32:51.83364334 +0000 UTC m=+20.333237155" Dec 13 01:32:51.836525 kubelet[3500]: I1213 01:32:51.834118 3500 topology_manager.go:215] "Topology Admit Handler" podUID="9decf99b-b592-4462-a57c-4b3186238d05" podNamespace="calico-system" podName="calico-typha-59f588f8-lz4v6" Dec 13 01:32:51.940626 kubelet[3500]: I1213 01:32:51.940163 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h4c\" (UniqueName: \"kubernetes.io/projected/9decf99b-b592-4462-a57c-4b3186238d05-kube-api-access-m9h4c\") pod \"calico-typha-59f588f8-lz4v6\" (UID: \"9decf99b-b592-4462-a57c-4b3186238d05\") " pod="calico-system/calico-typha-59f588f8-lz4v6" Dec 13 01:32:51.940967 kubelet[3500]: I1213 01:32:51.940841 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9decf99b-b592-4462-a57c-4b3186238d05-tigera-ca-bundle\") pod \"calico-typha-59f588f8-lz4v6\" (UID: \"9decf99b-b592-4462-a57c-4b3186238d05\") " pod="calico-system/calico-typha-59f588f8-lz4v6" Dec 13 01:32:51.940967 kubelet[3500]: I1213 01:32:51.940933 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9decf99b-b592-4462-a57c-4b3186238d05-typha-certs\") pod \"calico-typha-59f588f8-lz4v6\" (UID: \"9decf99b-b592-4462-a57c-4b3186238d05\") " pod="calico-system/calico-typha-59f588f8-lz4v6" Dec 13 01:32:52.015126 kubelet[3500]: I1213 01:32:52.013220 3500 topology_manager.go:215] "Topology Admit Handler" podUID="84eed9dd-3a57-4b2d-81ab-56717d1395d5" podNamespace="calico-system" podName="calico-node-7xhp8" Dec 13 01:32:52.143598 kubelet[3500]: I1213 01:32:52.143467 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-lib-modules\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.144547 kubelet[3500]: I1213 01:32:52.144020 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-cni-net-dir\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 
01:32:52.145434 kubelet[3500]: I1213 01:32:52.145367 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-xtables-lock\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.146111 kubelet[3500]: I1213 01:32:52.146024 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-cni-log-dir\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.146514 kubelet[3500]: I1213 01:32:52.146497 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-flexvol-driver-host\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.146732 kubelet[3500]: I1213 01:32:52.146720 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84eed9dd-3a57-4b2d-81ab-56717d1395d5-tigera-ca-bundle\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.147063 kubelet[3500]: I1213 01:32:52.147049 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/84eed9dd-3a57-4b2d-81ab-56717d1395d5-node-certs\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.148229 kubelet[3500]: I1213 01:32:52.147830 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-policysync\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.148872 kubelet[3500]: I1213 01:32:52.148409 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfl6\" (UniqueName: \"kubernetes.io/projected/84eed9dd-3a57-4b2d-81ab-56717d1395d5-kube-api-access-hjfl6\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.148872 kubelet[3500]: I1213 01:32:52.148540 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-var-run-calico\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.150195 kubelet[3500]: I1213 01:32:52.149662 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-cni-bin-dir\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.150195 kubelet[3500]: I1213 01:32:52.149863 3500 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84eed9dd-3a57-4b2d-81ab-56717d1395d5-var-lib-calico\") pod \"calico-node-7xhp8\" (UID: \"84eed9dd-3a57-4b2d-81ab-56717d1395d5\") " pod="calico-system/calico-node-7xhp8" Dec 13 01:32:52.150322 containerd[2086]: time="2024-12-13T01:32:52.149948800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59f588f8-lz4v6,Uid:9decf99b-b592-4462-a57c-4b3186238d05,Namespace:calico-system,Attempt:0,}" Dec 13 01:32:52.171751 kubelet[3500]: I1213 01:32:52.169996 3500 topology_manager.go:215] "Topology Admit Handler" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" podNamespace="calico-system" podName="csi-node-driver-nk5pm" Dec 13 01:32:52.172388 kubelet[3500]: E1213 01:32:52.172022 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:32:52.252722 containerd[2086]: time="2024-12-13T01:32:52.249965872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:52.252722 containerd[2086]: time="2024-12-13T01:32:52.250045637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:52.252722 containerd[2086]: time="2024-12-13T01:32:52.250063066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:52.252722 containerd[2086]: time="2024-12-13T01:32:52.250183513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:52.313421 kubelet[3500]: E1213 01:32:52.313010 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.313421 kubelet[3500]: W1213 01:32:52.313050 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.313421 kubelet[3500]: E1213 01:32:52.313076 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.315319 kubelet[3500]: E1213 01:32:52.314570 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.315319 kubelet[3500]: W1213 01:32:52.314593 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.315319 kubelet[3500]: E1213 01:32:52.314619 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.318713 kubelet[3500]: E1213 01:32:52.316617 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.318713 kubelet[3500]: W1213 01:32:52.316633 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.318713 kubelet[3500]: E1213 01:32:52.316665 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.318713 kubelet[3500]: E1213 01:32:52.317742 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.318713 kubelet[3500]: W1213 01:32:52.317755 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.318713 kubelet[3500]: E1213 01:32:52.317774 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.320836 kubelet[3500]: E1213 01:32:52.320414 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.320836 kubelet[3500]: W1213 01:32:52.320430 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.320836 kubelet[3500]: E1213 01:32:52.320450 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.320836 kubelet[3500]: E1213 01:32:52.320710 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.320836 kubelet[3500]: W1213 01:32:52.320722 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.320836 kubelet[3500]: E1213 01:32:52.320739 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.328630 kubelet[3500]: E1213 01:32:52.328598 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.328630 kubelet[3500]: W1213 01:32:52.328626 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.328905 kubelet[3500]: E1213 01:32:52.328657 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.331640 kubelet[3500]: E1213 01:32:52.331397 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.331640 kubelet[3500]: W1213 01:32:52.331418 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.331640 kubelet[3500]: E1213 01:32:52.331444 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.335327 kubelet[3500]: E1213 01:32:52.334377 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.335327 kubelet[3500]: W1213 01:32:52.334397 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.335327 kubelet[3500]: E1213 01:32:52.334421 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.347016 kubelet[3500]: E1213 01:32:52.346983 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.347016 kubelet[3500]: W1213 01:32:52.347004 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.347257 kubelet[3500]: E1213 01:32:52.347030 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.370547 kubelet[3500]: E1213 01:32:52.370522 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.371068 kubelet[3500]: W1213 01:32:52.370906 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.371068 kubelet[3500]: E1213 01:32:52.370946 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.371438 kubelet[3500]: I1213 01:32:52.371422 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37e377bc-ecb2-46be-838f-bd5c4df2e7cf-registration-dir\") pod \"csi-node-driver-nk5pm\" (UID: \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\") " pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:32:52.372116 kubelet[3500]: E1213 01:32:52.371912 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.372116 kubelet[3500]: W1213 01:32:52.371929 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.372116 kubelet[3500]: E1213 01:32:52.371951 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.372116 kubelet[3500]: I1213 01:32:52.371981 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37e377bc-ecb2-46be-838f-bd5c4df2e7cf-varrun\") pod \"csi-node-driver-nk5pm\" (UID: \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\") " pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:32:52.372592 kubelet[3500]: E1213 01:32:52.372578 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.372675 kubelet[3500]: W1213 01:32:52.372664 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.372756 kubelet[3500]: E1213 01:32:52.372746 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.373067 kubelet[3500]: E1213 01:32:52.373054 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.373210 kubelet[3500]: W1213 01:32:52.373196 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.378479 kubelet[3500]: E1213 01:32:52.373265 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.378479 kubelet[3500]: E1213 01:32:52.373628 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.378479 kubelet[3500]: W1213 01:32:52.373639 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.378479 kubelet[3500]: E1213 01:32:52.373655 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.378479 kubelet[3500]: I1213 01:32:52.373769 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37e377bc-ecb2-46be-838f-bd5c4df2e7cf-kubelet-dir\") pod \"csi-node-driver-nk5pm\" (UID: \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\") " pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:32:52.378479 kubelet[3500]: E1213 01:32:52.374020 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.378479 kubelet[3500]: W1213 01:32:52.374033 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.378479 kubelet[3500]: E1213 01:32:52.374071 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.378479 kubelet[3500]: I1213 01:32:52.374097 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37e377bc-ecb2-46be-838f-bd5c4df2e7cf-socket-dir\") pod \"csi-node-driver-nk5pm\" (UID: \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\") " pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:32:52.378979 kubelet[3500]: E1213 01:32:52.374381 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.378979 kubelet[3500]: W1213 01:32:52.374392 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.378979 kubelet[3500]: E1213 01:32:52.374406 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.378979 kubelet[3500]: I1213 01:32:52.374450 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5r8\" (UniqueName: \"kubernetes.io/projected/37e377bc-ecb2-46be-838f-bd5c4df2e7cf-kube-api-access-4k5r8\") pod \"csi-node-driver-nk5pm\" (UID: \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\") " pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:32:52.378979 kubelet[3500]: E1213 01:32:52.374822 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.378979 kubelet[3500]: W1213 01:32:52.374835 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.378979 kubelet[3500]: E1213 01:32:52.374852 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.378979 kubelet[3500]: E1213 01:32:52.375094 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.378979 kubelet[3500]: W1213 01:32:52.375103 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.375119 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.375377 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379459 kubelet[3500]: W1213 01:32:52.375387 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.375403 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.375683 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379459 kubelet[3500]: W1213 01:32:52.375694 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.375779 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.376067 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379459 kubelet[3500]: W1213 01:32:52.376095 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379459 kubelet[3500]: E1213 01:32:52.376111 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.376511 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379907 kubelet[3500]: W1213 01:32:52.376522 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.376539 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.376778 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379907 kubelet[3500]: W1213 01:32:52.376805 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.376834 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.377182 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.379907 kubelet[3500]: W1213 01:32:52.377192 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.377209 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.379907 kubelet[3500]: E1213 01:32:52.377478 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.380358 kubelet[3500]: W1213 01:32:52.377488 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.380358 kubelet[3500]: E1213 01:32:52.377502 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.380358 kubelet[3500]: E1213 01:32:52.377728 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.380358 kubelet[3500]: W1213 01:32:52.377737 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.380358 kubelet[3500]: E1213 01:32:52.377751 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.380358 kubelet[3500]: E1213 01:32:52.377972 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.380358 kubelet[3500]: W1213 01:32:52.377981 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.380358 kubelet[3500]: E1213 01:32:52.377996 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.404999 kubelet[3500]: E1213 01:32:52.402839 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.404999 kubelet[3500]: W1213 01:32:52.402860 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.404999 kubelet[3500]: E1213 01:32:52.402885 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.477468 kubelet[3500]: E1213 01:32:52.477403 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.477468 kubelet[3500]: W1213 01:32:52.477456 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.477919 kubelet[3500]: E1213 01:32:52.477485 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.479009 kubelet[3500]: E1213 01:32:52.478320 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.479009 kubelet[3500]: W1213 01:32:52.478493 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.481905 kubelet[3500]: E1213 01:32:52.479803 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.482072 kubelet[3500]: E1213 01:32:52.482059 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.482382 kubelet[3500]: W1213 01:32:52.482229 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.482382 kubelet[3500]: E1213 01:32:52.482262 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.482836 kubelet[3500]: E1213 01:32:52.482692 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.482836 kubelet[3500]: W1213 01:32:52.482706 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.482836 kubelet[3500]: E1213 01:32:52.482742 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.490154 kubelet[3500]: E1213 01:32:52.490133 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.492377 kubelet[3500]: W1213 01:32:52.492207 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.495971 kubelet[3500]: E1213 01:32:52.494457 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.497090 kubelet[3500]: E1213 01:32:52.495908 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.498106 kubelet[3500]: W1213 01:32:52.497670 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.499546 kubelet[3500]: E1213 01:32:52.498239 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.499985 kubelet[3500]: E1213 01:32:52.499960 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.499985 kubelet[3500]: W1213 01:32:52.499978 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510205 kubelet[3500]: E1213 01:32:52.507974 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.510205 kubelet[3500]: W1213 01:32:52.507998 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510205 kubelet[3500]: E1213 01:32:52.509304 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.510205 kubelet[3500]: W1213 01:32:52.509320 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510205 kubelet[3500]: E1213 01:32:52.509934 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.510205 kubelet[3500]: W1213 01:32:52.510058 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510205 kubelet[3500]: E1213 01:32:52.510087 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.510663 kubelet[3500]: E1213 01:32:52.510404 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.510663 kubelet[3500]: W1213 01:32:52.510415 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510663 kubelet[3500]: E1213 01:32:52.510433 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.510663 kubelet[3500]: E1213 01:32:52.510618 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.510663 kubelet[3500]: W1213 01:32:52.510627 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.510663 kubelet[3500]: E1213 01:32:52.510642 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.511664 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.512839 kubelet[3500]: W1213 01:32:52.511678 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.511698 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.511869 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.512276 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.512839 kubelet[3500]: W1213 01:32:52.512287 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.512303 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.512608 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.512839 kubelet[3500]: W1213 01:32:52.512619 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.512839 kubelet[3500]: E1213 01:32:52.512636 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.513404 kubelet[3500]: E1213 01:32:52.513079 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.513404 kubelet[3500]: W1213 01:32:52.513090 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.513404 kubelet[3500]: E1213 01:32:52.513106 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.513404 kubelet[3500]: E1213 01:32:52.513153 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.513404 kubelet[3500]: E1213 01:32:52.513173 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.513605 kubelet[3500]: E1213 01:32:52.513458 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.513605 kubelet[3500]: W1213 01:32:52.513469 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.513605 kubelet[3500]: E1213 01:32:52.513485 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.513949 kubelet[3500]: E1213 01:32:52.513786 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.513949 kubelet[3500]: W1213 01:32:52.513942 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.514051 kubelet[3500]: E1213 01:32:52.513965 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.514212 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.518495 kubelet[3500]: W1213 01:32:52.514225 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.514260 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.514625 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.518495 kubelet[3500]: W1213 01:32:52.514635 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.514652 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.515029 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.518495 kubelet[3500]: W1213 01:32:52.515040 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.515074 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.518495 kubelet[3500]: E1213 01:32:52.515424 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.520292 kubelet[3500]: W1213 01:32:52.516601 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.516618 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.517007 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.520292 kubelet[3500]: W1213 01:32:52.517021 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.517039 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.517276 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.520292 kubelet[3500]: W1213 01:32:52.517299 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.517317 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.520292 kubelet[3500]: E1213 01:32:52.517895 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.520292 kubelet[3500]: W1213 01:32:52.517906 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.522857 kubelet[3500]: E1213 01:32:52.517924 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.527585 containerd[2086]: time="2024-12-13T01:32:52.523518069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59f588f8-lz4v6,Uid:9decf99b-b592-4462-a57c-4b3186238d05,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9cae63d3adea2c5ef1b91133661b6c5cabc01fd606b2991f414b365f37d5f55\"" Dec 13 01:32:52.571423 kubelet[3500]: E1213 01:32:52.568370 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:52.571423 kubelet[3500]: W1213 01:32:52.568393 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:52.571423 kubelet[3500]: E1213 01:32:52.568434 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:52.574076 containerd[2086]: time="2024-12-13T01:32:52.574039866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:32:52.624807 containerd[2086]: time="2024-12-13T01:32:52.624683466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xhp8,Uid:84eed9dd-3a57-4b2d-81ab-56717d1395d5,Namespace:calico-system,Attempt:0,}" Dec 13 01:32:52.680733 containerd[2086]: time="2024-12-13T01:32:52.678715532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:52.681754 containerd[2086]: time="2024-12-13T01:32:52.681703926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:52.682475 containerd[2086]: time="2024-12-13T01:32:52.681899423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:52.683706 containerd[2086]: time="2024-12-13T01:32:52.682764069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:52.751214 containerd[2086]: time="2024-12-13T01:32:52.751175654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7xhp8,Uid:84eed9dd-3a57-4b2d-81ab-56717d1395d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\"" Dec 13 01:32:53.706913 kubelet[3500]: E1213 01:32:53.705977 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:32:54.223329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124891312.mount: Deactivated successfully. Dec 13 01:32:55.251567 containerd[2086]: time="2024-12-13T01:32:55.251516299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.253799 containerd[2086]: time="2024-12-13T01:32:55.253659088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:32:55.256380 containerd[2086]: time="2024-12-13T01:32:55.256290092Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.260685 containerd[2086]: time="2024-12-13T01:32:55.260609048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.262649 containerd[2086]: time="2024-12-13T01:32:55.261699980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.687612509s" Dec 13 01:32:55.262649 containerd[2086]: time="2024-12-13T01:32:55.261741449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:32:55.265105 containerd[2086]: time="2024-12-13T01:32:55.264986390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:32:55.289426 containerd[2086]: time="2024-12-13T01:32:55.289388064Z" level=info msg="CreateContainer within sandbox \"d9cae63d3adea2c5ef1b91133661b6c5cabc01fd606b2991f414b365f37d5f55\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:32:55.328473 containerd[2086]: time="2024-12-13T01:32:55.328153143Z" level=info msg="CreateContainer within sandbox \"d9cae63d3adea2c5ef1b91133661b6c5cabc01fd606b2991f414b365f37d5f55\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6cfd0c514f65c92ea561368768983bd00f9998b66cd6c898a930998da60b5342\"" Dec 13 01:32:55.329073 containerd[2086]: 
time="2024-12-13T01:32:55.329036373Z" level=info msg="StartContainer for \"6cfd0c514f65c92ea561368768983bd00f9998b66cd6c898a930998da60b5342\"" Dec 13 01:32:55.493483 containerd[2086]: time="2024-12-13T01:32:55.493406377Z" level=info msg="StartContainer for \"6cfd0c514f65c92ea561368768983bd00f9998b66cd6c898a930998da60b5342\" returns successfully" Dec 13 01:32:55.663690 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:32:55.663797 systemd-resolved[1968]: Flushed all caches. Dec 13 01:32:55.667166 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:32:55.685389 kubelet[3500]: E1213 01:32:55.685345 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:32:55.970219 kubelet[3500]: E1213 01:32:55.968954 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.970219 kubelet[3500]: W1213 01:32:55.968982 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.970219 kubelet[3500]: E1213 01:32:55.969008 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.970483 kubelet[3500]: E1213 01:32:55.970224 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.970483 kubelet[3500]: W1213 01:32:55.970238 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.970483 kubelet[3500]: E1213 01:32:55.970263 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.970949 kubelet[3500]: E1213 01:32:55.970922 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.970949 kubelet[3500]: W1213 01:32:55.970939 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.971584 kubelet[3500]: E1213 01:32:55.970959 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:55.971584 kubelet[3500]: E1213 01:32:55.971234 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.971584 kubelet[3500]: W1213 01:32:55.971244 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.971584 kubelet[3500]: E1213 01:32:55.971261 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.971584 kubelet[3500]: E1213 01:32:55.971519 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.971584 kubelet[3500]: W1213 01:32:55.971530 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.971584 kubelet[3500]: E1213 01:32:55.971546 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.972924 kubelet[3500]: E1213 01:32:55.971730 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.972924 kubelet[3500]: W1213 01:32:55.971771 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.972924 kubelet[3500]: E1213 01:32:55.971787 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.972924 kubelet[3500]: E1213 01:32:55.971976 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.972924 kubelet[3500]: W1213 01:32:55.971986 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.972924 kubelet[3500]: E1213 01:32:55.972000 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.973725 kubelet[3500]: E1213 01:32:55.973596 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.973725 kubelet[3500]: W1213 01:32:55.973612 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.973725 kubelet[3500]: E1213 01:32:55.973628 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:55.974302 kubelet[3500]: E1213 01:32:55.974261 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.974302 kubelet[3500]: W1213 01:32:55.974275 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.974302 kubelet[3500]: E1213 01:32:55.974294 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.975711 kubelet[3500]: E1213 01:32:55.975690 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.975711 kubelet[3500]: W1213 01:32:55.975705 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.975847 kubelet[3500]: E1213 01:32:55.975722 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.977722 kubelet[3500]: E1213 01:32:55.977703 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.977722 kubelet[3500]: W1213 01:32:55.977722 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.978422 kubelet[3500]: E1213 01:32:55.977739 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.978422 kubelet[3500]: E1213 01:32:55.978122 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.978422 kubelet[3500]: W1213 01:32:55.978135 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.978422 kubelet[3500]: E1213 01:32:55.978151 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.978631 kubelet[3500]: E1213 01:32:55.978458 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.978631 kubelet[3500]: W1213 01:32:55.978469 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.978631 kubelet[3500]: E1213 01:32:55.978485 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:55.978769 kubelet[3500]: E1213 01:32:55.978688 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.978769 kubelet[3500]: W1213 01:32:55.978697 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.978769 kubelet[3500]: E1213 01:32:55.978712 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:55.978915 kubelet[3500]: E1213 01:32:55.978901 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:55.978915 kubelet[3500]: W1213 01:32:55.978910 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:55.979447 kubelet[3500]: E1213 01:32:55.978924 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.056134 kubelet[3500]: E1213 01:32:56.055749 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.056134 kubelet[3500]: W1213 01:32:56.055837 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.056134 kubelet[3500]: E1213 01:32:56.055867 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.056393 kubelet[3500]: E1213 01:32:56.056216 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.056393 kubelet[3500]: W1213 01:32:56.056227 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.056393 kubelet[3500]: E1213 01:32:56.056299 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.057001 kubelet[3500]: E1213 01:32:56.056735 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.057001 kubelet[3500]: W1213 01:32:56.056772 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.057245 kubelet[3500]: E1213 01:32:56.057172 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.058119 kubelet[3500]: E1213 01:32:56.057602 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.058119 kubelet[3500]: W1213 01:32:56.057618 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.058119 kubelet[3500]: E1213 01:32:56.057647 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.058119 kubelet[3500]: E1213 01:32:56.057968 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.058119 kubelet[3500]: W1213 01:32:56.057979 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.058119 kubelet[3500]: E1213 01:32:56.058081 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.058509 kubelet[3500]: E1213 01:32:56.058330 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.058509 kubelet[3500]: W1213 01:32:56.058370 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.058509 kubelet[3500]: E1213 01:32:56.058443 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.058974 kubelet[3500]: E1213 01:32:56.058956 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.058974 kubelet[3500]: W1213 01:32:56.058971 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.059199 kubelet[3500]: E1213 01:32:56.059075 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.059409 kubelet[3500]: E1213 01:32:56.059390 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.059409 kubelet[3500]: W1213 01:32:56.059405 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.059502 kubelet[3500]: E1213 01:32:56.059434 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.060301 kubelet[3500]: E1213 01:32:56.059962 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.060301 kubelet[3500]: W1213 01:32:56.059974 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.060301 kubelet[3500]: E1213 01:32:56.060033 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.060476 kubelet[3500]: E1213 01:32:56.060368 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.060476 kubelet[3500]: W1213 01:32:56.060379 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.060476 kubelet[3500]: E1213 01:32:56.060444 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.060703 kubelet[3500]: E1213 01:32:56.060661 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.060703 kubelet[3500]: W1213 01:32:56.060671 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.060802 kubelet[3500]: E1213 01:32:56.060705 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.060983 kubelet[3500]: E1213 01:32:56.060972 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.061052 kubelet[3500]: W1213 01:32:56.060984 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.061052 kubelet[3500]: E1213 01:32:56.061013 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.061292 kubelet[3500]: E1213 01:32:56.061271 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.061292 kubelet[3500]: W1213 01:32:56.061286 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.061427 kubelet[3500]: E1213 01:32:56.061324 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.064110 kubelet[3500]: E1213 01:32:56.062243 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.064110 kubelet[3500]: W1213 01:32:56.062256 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.064110 kubelet[3500]: E1213 01:32:56.062277 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.064110 kubelet[3500]: E1213 01:32:56.063855 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.064110 kubelet[3500]: W1213 01:32:56.063867 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.064110 kubelet[3500]: E1213 01:32:56.063952 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.064448 kubelet[3500]: E1213 01:32:56.064169 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.064448 kubelet[3500]: W1213 01:32:56.064180 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.064448 kubelet[3500]: E1213 01:32:56.064210 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.064588 kubelet[3500]: E1213 01:32:56.064504 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.064588 kubelet[3500]: W1213 01:32:56.064514 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.064588 kubelet[3500]: E1213 01:32:56.064530 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.064970 kubelet[3500]: E1213 01:32:56.064953 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.064970 kubelet[3500]: W1213 01:32:56.064971 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.065087 kubelet[3500]: E1213 01:32:56.064986 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.868534 kubelet[3500]: I1213 01:32:56.868109 3500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:32:56.870120 containerd[2086]: time="2024-12-13T01:32:56.870081219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:56.885778 containerd[2086]: time="2024-12-13T01:32:56.885716181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:32:56.887741 kubelet[3500]: E1213 01:32:56.887712 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.887741 kubelet[3500]: W1213 01:32:56.887737 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.887975 kubelet[3500]: E1213 01:32:56.887766 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.888049 kubelet[3500]: E1213 01:32:56.888036 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.888101 kubelet[3500]: W1213 01:32:56.888048 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.888101 kubelet[3500]: E1213 01:32:56.888066 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.888322 kubelet[3500]: E1213 01:32:56.888298 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.888322 kubelet[3500]: W1213 01:32:56.888317 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.888521 kubelet[3500]: E1213 01:32:56.888348 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.888612 kubelet[3500]: E1213 01:32:56.888587 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.888612 kubelet[3500]: W1213 01:32:56.888602 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.888762 kubelet[3500]: E1213 01:32:56.888618 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.888858 kubelet[3500]: E1213 01:32:56.888836 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.888858 kubelet[3500]: W1213 01:32:56.888852 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.888990 kubelet[3500]: E1213 01:32:56.888868 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.889086 kubelet[3500]: E1213 01:32:56.889065 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.889086 kubelet[3500]: W1213 01:32:56.889080 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.889210 kubelet[3500]: E1213 01:32:56.889094 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.889317 kubelet[3500]: E1213 01:32:56.889300 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.889317 kubelet[3500]: W1213 01:32:56.889314 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.889462 kubelet[3500]: E1213 01:32:56.889328 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.889597 kubelet[3500]: E1213 01:32:56.889570 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.889597 kubelet[3500]: W1213 01:32:56.889583 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.889719 kubelet[3500]: E1213 01:32:56.889601 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.890113 kubelet[3500]: E1213 01:32:56.889826 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.890113 kubelet[3500]: W1213 01:32:56.889835 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.890113 kubelet[3500]: E1213 01:32:56.889857 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.890113 kubelet[3500]: E1213 01:32:56.890085 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.890113 kubelet[3500]: W1213 01:32:56.890096 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.890113 kubelet[3500]: E1213 01:32:56.890111 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.890509 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.892428 kubelet[3500]: W1213 01:32:56.890520 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.890535 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.890826 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.892428 kubelet[3500]: W1213 01:32:56.890838 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.890854 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.891065 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.892428 kubelet[3500]: W1213 01:32:56.891074 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.891088 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.892428 kubelet[3500]: E1213 01:32:56.891279 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.892840 kubelet[3500]: W1213 01:32:56.891288 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.892840 kubelet[3500]: E1213 01:32:56.891302 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.892840 kubelet[3500]: E1213 01:32:56.891560 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.892840 kubelet[3500]: W1213 01:32:56.891569 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.892840 kubelet[3500]: E1213 01:32:56.891587 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.915796 containerd[2086]: time="2024-12-13T01:32:56.915740353Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:56.924250 containerd[2086]: time="2024-12-13T01:32:56.924093211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:56.925302 containerd[2086]: time="2024-12-13T01:32:56.925061865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.659666199s" Dec 13 01:32:56.925302 containerd[2086]: time="2024-12-13T01:32:56.925139432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:32:56.929432 containerd[2086]: time="2024-12-13T01:32:56.929388798Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:32:56.964144 kubelet[3500]: E1213 01:32:56.964106 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.964144 kubelet[3500]: W1213 01:32:56.964133 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.964144 kubelet[3500]: E1213 01:32:56.964159 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.967830 kubelet[3500]: E1213 01:32:56.967520 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.967830 kubelet[3500]: W1213 01:32:56.967541 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.967830 kubelet[3500]: E1213 01:32:56.967573 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.968220 kubelet[3500]: E1213 01:32:56.968204 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.968499 kubelet[3500]: W1213 01:32:56.968294 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.968499 kubelet[3500]: E1213 01:32:56.968321 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.968767 kubelet[3500]: E1213 01:32:56.968755 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.968944 kubelet[3500]: W1213 01:32:56.968834 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.968944 kubelet[3500]: E1213 01:32:56.968858 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.969212 kubelet[3500]: E1213 01:32:56.969199 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.969405 kubelet[3500]: W1213 01:32:56.969284 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.969405 kubelet[3500]: E1213 01:32:56.969358 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.969752 kubelet[3500]: E1213 01:32:56.969647 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.969752 kubelet[3500]: W1213 01:32:56.969660 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.969752 kubelet[3500]: E1213 01:32:56.969690 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.970156 kubelet[3500]: E1213 01:32:56.970025 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.970156 kubelet[3500]: W1213 01:32:56.970038 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.970156 kubelet[3500]: E1213 01:32:56.970059 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.973417 kubelet[3500]: E1213 01:32:56.973297 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.973417 kubelet[3500]: W1213 01:32:56.973314 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.978561 kubelet[3500]: E1213 01:32:56.978373 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.979871 kubelet[3500]: E1213 01:32:56.978916 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.979871 kubelet[3500]: W1213 01:32:56.978953 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.979871 kubelet[3500]: E1213 01:32:56.979814 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.983760 kubelet[3500]: E1213 01:32:56.983742 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.984018 kubelet[3500]: W1213 01:32:56.983933 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.984201 kubelet[3500]: E1213 01:32:56.984109 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.985034 kubelet[3500]: E1213 01:32:56.984635 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.985034 kubelet[3500]: W1213 01:32:56.984658 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.985357 kubelet[3500]: E1213 01:32:56.985250 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:56.985547 kubelet[3500]: E1213 01:32:56.985520 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.985547 kubelet[3500]: W1213 01:32:56.985532 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.985872 kubelet[3500]: E1213 01:32:56.985776 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.987654 kubelet[3500]: E1213 01:32:56.987624 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.989764 kubelet[3500]: W1213 01:32:56.989287 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.991278 kubelet[3500]: E1213 01:32:56.991261 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.995008 kubelet[3500]: E1213 01:32:56.991746 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.995008 kubelet[3500]: W1213 01:32:56.993643 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.995008 kubelet[3500]: E1213 01:32:56.993676 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.995008 kubelet[3500]: E1213 01:32:56.994737 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.995008 kubelet[3500]: W1213 01:32:56.994749 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.995008 kubelet[3500]: E1213 01:32:56.994771 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:56.997159 kubelet[3500]: E1213 01:32:56.996659 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:56.997159 kubelet[3500]: W1213 01:32:56.996670 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:56.997159 kubelet[3500]: E1213 01:32:56.996866 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:32:57.001578 kubelet[3500]: E1213 01:32:57.001534 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:57.001578 kubelet[3500]: W1213 01:32:57.001556 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:57.001924 kubelet[3500]: E1213 01:32:57.001754 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:57.002618 kubelet[3500]: E1213 01:32:57.002486 3500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:32:57.002618 kubelet[3500]: W1213 01:32:57.002504 3500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:32:57.002618 kubelet[3500]: E1213 01:32:57.002526 3500 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:32:57.220448 containerd[2086]: time="2024-12-13T01:32:57.220313187Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed\"" Dec 13 01:32:57.222475 containerd[2086]: time="2024-12-13T01:32:57.222432013Z" level=info msg="StartContainer for \"1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed\"" Dec 13 01:32:57.309710 containerd[2086]: time="2024-12-13T01:32:57.309668672Z" level=info msg="StartContainer for \"1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed\" returns successfully" Dec 13 01:32:57.362143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed-rootfs.mount: Deactivated successfully. 
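Note on the repeated driver-call.go and plugins.go errors above: they all come from the kubelet's FlexVolume probe. It finds the vendor directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to run the uds driver with the init argument, gets no executable and therefore an empty reply, and decoding that empty reply as JSON fails with "unexpected end of JSON input". They stop once a driver binary actually exists at that path, which is what the flexvol-driver container created above (from the pod2daemon-flexvol image) installs. The following minimal Go sketch only illustrates the two failure modes, it is not the kubelet's actual code; the driver path is taken verbatim from the log lines above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// probeFlexVolumeDriver runs "<driver> init" and decodes its JSON reply,
// mirroring in spirit what the driver-call.go lines above describe.
func probeFlexVolumeDriver(driverPath string) error {
	out, err := exec.Command(driverPath, "init").Output()
	if err != nil {
		// Missing binary: the call fails and out stays empty; the kubelet
		// surfaces this as the FlexVolume "driver call failed" warning.
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}
	// Decoding the empty output is what yields the
	// "unexpected end of JSON input" error seen above.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		return fmt.Errorf("failed to unmarshal output for command init: %w", err)
	}
	return nil
}

func main() {
	err := probeFlexVolumeDriver(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}

Run against a path with no executable, the sketch prints the exec failure and then the same "unexpected end of JSON input" error that fills the log above.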
Dec 13 01:32:57.439226 containerd[2086]: time="2024-12-13T01:32:57.420374717Z" level=info msg="shim disconnected" id=1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed namespace=k8s.io Dec 13 01:32:57.439617 containerd[2086]: time="2024-12-13T01:32:57.439227830Z" level=warning msg="cleaning up after shim disconnected" id=1d04791215c0738580c08acbc2004205cfec279b0b6895e4483d01515ad145ed namespace=k8s.io Dec 13 01:32:57.439617 containerd[2086]: time="2024-12-13T01:32:57.439263083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:57.683895 kubelet[3500]: E1213 01:32:57.683409 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:32:57.873068 containerd[2086]: time="2024-12-13T01:32:57.872870510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:32:57.903079 kubelet[3500]: I1213 01:32:57.902155 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59f588f8-lz4v6" podStartSLOduration=4.213348652 podStartE2EDuration="6.902101403s" podCreationTimestamp="2024-12-13 01:32:51 +0000 UTC" firstStartedPulling="2024-12-13 01:32:52.573388399 +0000 UTC m=+21.072982214" lastFinishedPulling="2024-12-13 01:32:55.262141157 +0000 UTC m=+23.761734965" observedRunningTime="2024-12-13 01:32:55.935901176 +0000 UTC m=+24.435495023" watchObservedRunningTime="2024-12-13 01:32:57.902101403 +0000 UTC m=+26.401695225" Dec 13 01:32:59.683854 kubelet[3500]: E1213 01:32:59.683576 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:01.686786 kubelet[3500]: E1213 01:33:01.684850 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:03.598814 containerd[2086]: time="2024-12-13T01:33:03.598759259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:03.600875 containerd[2086]: time="2024-12-13T01:33:03.600653636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:33:03.604915 containerd[2086]: time="2024-12-13T01:33:03.603502380Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:03.608763 containerd[2086]: time="2024-12-13T01:33:03.608703273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:03.610710 containerd[2086]: time="2024-12-13T01:33:03.610614131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" 
with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.737578558s" Dec 13 01:33:03.610935 containerd[2086]: time="2024-12-13T01:33:03.610743146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:33:03.614533 containerd[2086]: time="2024-12-13T01:33:03.614498218Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:33:03.685307 kubelet[3500]: E1213 01:33:03.684610 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:03.817069 containerd[2086]: time="2024-12-13T01:33:03.816819386Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff\"" Dec 13 01:33:03.817997 containerd[2086]: time="2024-12-13T01:33:03.817831604Z" level=info msg="StartContainer for \"9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff\"" Dec 13 01:33:03.953848 containerd[2086]: time="2024-12-13T01:33:03.953315121Z" level=info msg="StartContainer for \"9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff\" returns successfully" Dec 13 01:33:05.684951 kubelet[3500]: E1213 01:33:05.684920 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:07.291588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff-rootfs.mount: Deactivated successfully. 
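Note on the recurring pod_workers.go errors for csi-node-driver-nk5pm ("NetworkReady=false ... cni plugin not initialized"): they persist until the install-cni container started above has written the Calico CNI configuration onto the node; immediately afterwards the kubelet logs "Fast updating node status as it just became ready" below and the pending coredns and calico-apiserver pods are admitted and get sandboxes. The readiness condition amounts to the container runtime finding at least one CNI network configuration file. The minimal Go sketch below shows that kind of check only as an illustration; the directory /etc/cni/net.d is the conventional default and an assumption here, since the actual path depends on how containerd and the kubelet are configured on this host.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether any CNI network configuration file
// exists in dir; an empty directory corresponds to the
// "cni plugin not initialized" state seen in the log above.
func cniConfigPresent(dir string) (bool, error) {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/cni/net.d") // assumed default path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", ok)
}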
Dec 13 01:33:07.293020 kubelet[3500]: I1213 01:33:07.292658 3500 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:33:07.310485 containerd[2086]: time="2024-12-13T01:33:07.310278503Z" level=info msg="shim disconnected" id=9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff namespace=k8s.io Dec 13 01:33:07.313437 containerd[2086]: time="2024-12-13T01:33:07.310490174Z" level=warning msg="cleaning up after shim disconnected" id=9d18cb7f177e75192c04a8df79363d5ecfb88e16f26143ec7254502d3b842aff namespace=k8s.io Dec 13 01:33:07.313437 containerd[2086]: time="2024-12-13T01:33:07.310505414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:07.342471 kubelet[3500]: I1213 01:33:07.342424 3500 topology_manager.go:215] "Topology Admit Handler" podUID="0e3b0103-676b-4665-aa0f-646d586812a7" podNamespace="kube-system" podName="coredns-76f75df574-blsbn" Dec 13 01:33:07.354641 kubelet[3500]: I1213 01:33:07.354611 3500 topology_manager.go:215] "Topology Admit Handler" podUID="8a855430-bc04-49b7-b6f5-e957608abefc" podNamespace="kube-system" podName="coredns-76f75df574-z9648" Dec 13 01:33:07.358069 kubelet[3500]: I1213 01:33:07.358031 3500 topology_manager.go:215] "Topology Admit Handler" podUID="32353b72-7c12-4d50-b914-daf11815550a" podNamespace="calico-system" podName="calico-kube-controllers-ddd8db86d-nkrpz" Dec 13 01:33:07.361091 kubelet[3500]: I1213 01:33:07.361053 3500 topology_manager.go:215] "Topology Admit Handler" podUID="17b16b6f-09f1-4ddb-a8a5-c0692d5afcac" podNamespace="calico-apiserver" podName="calico-apiserver-7977594b99-6rk5q" Dec 13 01:33:07.362824 kubelet[3500]: I1213 01:33:07.361268 3500 topology_manager.go:215] "Topology Admit Handler" podUID="0200e2b4-cc5f-4c09-802c-ef6b6feab695" podNamespace="calico-apiserver" podName="calico-apiserver-7977594b99-m4sgj" Dec 13 01:33:07.458657 kubelet[3500]: I1213 01:33:07.458618 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0200e2b4-cc5f-4c09-802c-ef6b6feab695-calico-apiserver-certs\") pod \"calico-apiserver-7977594b99-m4sgj\" (UID: \"0200e2b4-cc5f-4c09-802c-ef6b6feab695\") " pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" Dec 13 01:33:07.458825 kubelet[3500]: I1213 01:33:07.458677 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a855430-bc04-49b7-b6f5-e957608abefc-config-volume\") pod \"coredns-76f75df574-z9648\" (UID: \"8a855430-bc04-49b7-b6f5-e957608abefc\") " pod="kube-system/coredns-76f75df574-z9648" Dec 13 01:33:07.458825 kubelet[3500]: I1213 01:33:07.458709 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17b16b6f-09f1-4ddb-a8a5-c0692d5afcac-calico-apiserver-certs\") pod \"calico-apiserver-7977594b99-6rk5q\" (UID: \"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac\") " pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" Dec 13 01:33:07.458825 kubelet[3500]: I1213 01:33:07.458738 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbs92\" (UniqueName: \"kubernetes.io/projected/8a855430-bc04-49b7-b6f5-e957608abefc-kube-api-access-fbs92\") pod \"coredns-76f75df574-z9648\" (UID: \"8a855430-bc04-49b7-b6f5-e957608abefc\") " pod="kube-system/coredns-76f75df574-z9648" Dec 13 
01:33:07.458825 kubelet[3500]: I1213 01:33:07.458772 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvjf9\" (UniqueName: \"kubernetes.io/projected/32353b72-7c12-4d50-b914-daf11815550a-kube-api-access-nvjf9\") pod \"calico-kube-controllers-ddd8db86d-nkrpz\" (UID: \"32353b72-7c12-4d50-b914-daf11815550a\") " pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" Dec 13 01:33:07.458825 kubelet[3500]: I1213 01:33:07.458800 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscpk\" (UniqueName: \"kubernetes.io/projected/0e3b0103-676b-4665-aa0f-646d586812a7-kube-api-access-kscpk\") pod \"coredns-76f75df574-blsbn\" (UID: \"0e3b0103-676b-4665-aa0f-646d586812a7\") " pod="kube-system/coredns-76f75df574-blsbn" Dec 13 01:33:07.459065 kubelet[3500]: I1213 01:33:07.458831 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fv8\" (UniqueName: \"kubernetes.io/projected/17b16b6f-09f1-4ddb-a8a5-c0692d5afcac-kube-api-access-x8fv8\") pod \"calico-apiserver-7977594b99-6rk5q\" (UID: \"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac\") " pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" Dec 13 01:33:07.459065 kubelet[3500]: I1213 01:33:07.458862 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e3b0103-676b-4665-aa0f-646d586812a7-config-volume\") pod \"coredns-76f75df574-blsbn\" (UID: \"0e3b0103-676b-4665-aa0f-646d586812a7\") " pod="kube-system/coredns-76f75df574-blsbn" Dec 13 01:33:07.459065 kubelet[3500]: I1213 01:33:07.458900 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32353b72-7c12-4d50-b914-daf11815550a-tigera-ca-bundle\") pod \"calico-kube-controllers-ddd8db86d-nkrpz\" (UID: \"32353b72-7c12-4d50-b914-daf11815550a\") " pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" Dec 13 01:33:07.459065 kubelet[3500]: I1213 01:33:07.458931 3500 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26nzp\" (UniqueName: \"kubernetes.io/projected/0200e2b4-cc5f-4c09-802c-ef6b6feab695-kube-api-access-26nzp\") pod \"calico-apiserver-7977594b99-m4sgj\" (UID: \"0200e2b4-cc5f-4c09-802c-ef6b6feab695\") " pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" Dec 13 01:33:07.665470 containerd[2086]: time="2024-12-13T01:33:07.664458798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blsbn,Uid:0e3b0103-676b-4665-aa0f-646d586812a7,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:07.686113 containerd[2086]: time="2024-12-13T01:33:07.686049342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-m4sgj,Uid:0200e2b4-cc5f-4c09-802c-ef6b6feab695,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:07.701632 containerd[2086]: time="2024-12-13T01:33:07.699897798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nk5pm,Uid:37e377bc-ecb2-46be-838f-bd5c4df2e7cf,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:07.702004 containerd[2086]: time="2024-12-13T01:33:07.701959147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9648,Uid:8a855430-bc04-49b7-b6f5-e957608abefc,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:07.702414 
containerd[2086]: time="2024-12-13T01:33:07.702385265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-6rk5q,Uid:17b16b6f-09f1-4ddb-a8a5-c0692d5afcac,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:07.702815 containerd[2086]: time="2024-12-13T01:33:07.702786387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd8db86d-nkrpz,Uid:32353b72-7c12-4d50-b914-daf11815550a,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:07.947476 containerd[2086]: time="2024-12-13T01:33:07.945915524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:33:08.178135 containerd[2086]: time="2024-12-13T01:33:08.178074424Z" level=error msg="Failed to destroy network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.201417 containerd[2086]: time="2024-12-13T01:33:08.199422158Z" level=error msg="encountered an error cleaning up failed sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.202810 containerd[2086]: time="2024-12-13T01:33:08.202755113Z" level=error msg="Failed to destroy network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.204246 containerd[2086]: time="2024-12-13T01:33:08.204195465Z" level=error msg="encountered an error cleaning up failed sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.218613 containerd[2086]: time="2024-12-13T01:33:08.218552640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blsbn,Uid:0e3b0103-676b-4665-aa0f-646d586812a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.225788 containerd[2086]: time="2024-12-13T01:33:08.225725888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9648,Uid:8a855430-bc04-49b7-b6f5-e957608abefc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.236202 kubelet[3500]: E1213 01:33:08.236167 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.236409 kubelet[3500]: E1213 01:33:08.236246 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9648" Dec 13 01:33:08.236409 kubelet[3500]: E1213 01:33:08.236274 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z9648" Dec 13 01:33:08.236409 kubelet[3500]: E1213 01:33:08.236351 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z9648_kube-system(8a855430-bc04-49b7-b6f5-e957608abefc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z9648_kube-system(8a855430-bc04-49b7-b6f5-e957608abefc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9648" podUID="8a855430-bc04-49b7-b6f5-e957608abefc" Dec 13 01:33:08.237364 kubelet[3500]: E1213 01:33:08.236750 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.237364 kubelet[3500]: E1213 01:33:08.236796 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-blsbn" Dec 13 01:33:08.237364 kubelet[3500]: E1213 01:33:08.236826 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-blsbn" Dec 13 01:33:08.237585 kubelet[3500]: E1213 
01:33:08.236881 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-blsbn_kube-system(0e3b0103-676b-4665-aa0f-646d586812a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-blsbn_kube-system(0e3b0103-676b-4665-aa0f-646d586812a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-blsbn" podUID="0e3b0103-676b-4665-aa0f-646d586812a7" Dec 13 01:33:08.243750 containerd[2086]: time="2024-12-13T01:33:08.243529749Z" level=error msg="Failed to destroy network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.244390 containerd[2086]: time="2024-12-13T01:33:08.243907472Z" level=error msg="encountered an error cleaning up failed sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.244390 containerd[2086]: time="2024-12-13T01:33:08.243971879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-6rk5q,Uid:17b16b6f-09f1-4ddb-a8a5-c0692d5afcac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.244588 kubelet[3500]: E1213 01:33:08.244201 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.244588 kubelet[3500]: E1213 01:33:08.244249 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" Dec 13 01:33:08.244588 kubelet[3500]: E1213 01:33:08.244278 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" Dec 13 01:33:08.245061 kubelet[3500]: E1213 01:33:08.244364 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7977594b99-6rk5q_calico-apiserver(17b16b6f-09f1-4ddb-a8a5-c0692d5afcac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7977594b99-6rk5q_calico-apiserver(17b16b6f-09f1-4ddb-a8a5-c0692d5afcac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" podUID="17b16b6f-09f1-4ddb-a8a5-c0692d5afcac" Dec 13 01:33:08.251742 containerd[2086]: time="2024-12-13T01:33:08.251689961Z" level=error msg="Failed to destroy network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.252361 containerd[2086]: time="2024-12-13T01:33:08.252226653Z" level=error msg="encountered an error cleaning up failed sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.252361 containerd[2086]: time="2024-12-13T01:33:08.252288359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-m4sgj,Uid:0200e2b4-cc5f-4c09-802c-ef6b6feab695,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.253137 kubelet[3500]: E1213 01:33:08.252756 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.253137 kubelet[3500]: E1213 01:33:08.252816 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" Dec 13 01:33:08.253137 kubelet[3500]: E1213 01:33:08.252859 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" Dec 13 01:33:08.253313 kubelet[3500]: E1213 01:33:08.252939 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7977594b99-m4sgj_calico-apiserver(0200e2b4-cc5f-4c09-802c-ef6b6feab695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7977594b99-m4sgj_calico-apiserver(0200e2b4-cc5f-4c09-802c-ef6b6feab695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" podUID="0200e2b4-cc5f-4c09-802c-ef6b6feab695" Dec 13 01:33:08.260913 containerd[2086]: time="2024-12-13T01:33:08.260391155Z" level=error msg="Failed to destroy network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.260913 containerd[2086]: time="2024-12-13T01:33:08.260855608Z" level=error msg="encountered an error cleaning up failed sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.261090 containerd[2086]: time="2024-12-13T01:33:08.260931696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nk5pm,Uid:37e377bc-ecb2-46be-838f-bd5c4df2e7cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.261231 kubelet[3500]: E1213 01:33:08.261207 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.261319 kubelet[3500]: E1213 01:33:08.261271 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:33:08.261319 kubelet[3500]: E1213 01:33:08.261299 3500 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nk5pm" Dec 13 01:33:08.261430 kubelet[3500]: E1213 01:33:08.261391 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nk5pm_calico-system(37e377bc-ecb2-46be-838f-bd5c4df2e7cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nk5pm_calico-system(37e377bc-ecb2-46be-838f-bd5c4df2e7cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:08.264215 containerd[2086]: time="2024-12-13T01:33:08.264170664Z" level=error msg="Failed to destroy network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.264721 containerd[2086]: time="2024-12-13T01:33:08.264632153Z" level=error msg="encountered an error cleaning up failed sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.264830 containerd[2086]: time="2024-12-13T01:33:08.264749994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd8db86d-nkrpz,Uid:32353b72-7c12-4d50-b914-daf11815550a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.265083 kubelet[3500]: E1213 01:33:08.265059 3500 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:08.265169 kubelet[3500]: E1213 01:33:08.265112 3500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" Dec 13 01:33:08.265169 kubelet[3500]: E1213 01:33:08.265154 3500 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" Dec 13 01:33:08.265265 kubelet[3500]: E1213 01:33:08.265228 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ddd8db86d-nkrpz_calico-system(32353b72-7c12-4d50-b914-daf11815550a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ddd8db86d-nkrpz_calico-system(32353b72-7c12-4d50-b914-daf11815550a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" podUID="32353b72-7c12-4d50-b914-daf11815550a" Dec 13 01:33:08.943464 kubelet[3500]: I1213 01:33:08.943435 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:08.948449 kubelet[3500]: I1213 01:33:08.947984 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:08.949068 containerd[2086]: time="2024-12-13T01:33:08.948785653Z" level=info msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" Dec 13 01:33:08.952812 containerd[2086]: time="2024-12-13T01:33:08.952471474Z" level=info msg="Ensure that sandbox eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c in task-service has been cleanup successfully" Dec 13 01:33:08.955924 containerd[2086]: time="2024-12-13T01:33:08.955304668Z" level=info msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" Dec 13 01:33:08.955924 containerd[2086]: time="2024-12-13T01:33:08.955603913Z" level=info msg="Ensure that sandbox ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909 in task-service has been cleanup successfully" Dec 13 01:33:08.958604 kubelet[3500]: I1213 01:33:08.958581 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:08.960315 containerd[2086]: time="2024-12-13T01:33:08.960275961Z" level=info msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" Dec 13 01:33:08.962100 containerd[2086]: time="2024-12-13T01:33:08.961796832Z" level=info msg="Ensure that sandbox 11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db in task-service has been cleanup successfully" Dec 13 01:33:08.962945 kubelet[3500]: I1213 01:33:08.962922 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:08.966510 containerd[2086]: time="2024-12-13T01:33:08.964881555Z" level=info msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" Dec 13 
01:33:08.966510 containerd[2086]: time="2024-12-13T01:33:08.965091677Z" level=info msg="Ensure that sandbox ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82 in task-service has been cleanup successfully" Dec 13 01:33:08.980897 kubelet[3500]: I1213 01:33:08.980142 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:08.985525 containerd[2086]: time="2024-12-13T01:33:08.985212314Z" level=info msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" Dec 13 01:33:08.985757 containerd[2086]: time="2024-12-13T01:33:08.985723730Z" level=info msg="Ensure that sandbox b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6 in task-service has been cleanup successfully" Dec 13 01:33:08.993660 kubelet[3500]: I1213 01:33:08.993632 3500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:08.995779 containerd[2086]: time="2024-12-13T01:33:08.995648230Z" level=info msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" Dec 13 01:33:08.996705 containerd[2086]: time="2024-12-13T01:33:08.995871943Z" level=info msg="Ensure that sandbox ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2 in task-service has been cleanup successfully" Dec 13 01:33:09.062712 containerd[2086]: time="2024-12-13T01:33:09.062659654Z" level=error msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" failed" error="failed to destroy network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.063298 kubelet[3500]: E1213 01:33:09.063122 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:09.063298 kubelet[3500]: E1213 01:33:09.063231 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82"} Dec 13 01:33:09.064239 kubelet[3500]: E1213 01:33:09.063525 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32353b72-7c12-4d50-b914-daf11815550a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:09.064239 kubelet[3500]: E1213 01:33:09.063675 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32353b72-7c12-4d50-b914-daf11815550a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" podUID="32353b72-7c12-4d50-b914-daf11815550a" Dec 13 01:33:09.114756 containerd[2086]: time="2024-12-13T01:33:09.114589083Z" level=error msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" failed" error="failed to destroy network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.114904 kubelet[3500]: E1213 01:33:09.114867 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:09.114967 kubelet[3500]: E1213 01:33:09.114921 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909"} Dec 13 01:33:09.115009 kubelet[3500]: E1213 01:33:09.114970 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a855430-bc04-49b7-b6f5-e957608abefc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:09.115118 kubelet[3500]: E1213 01:33:09.115011 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a855430-bc04-49b7-b6f5-e957608abefc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z9648" podUID="8a855430-bc04-49b7-b6f5-e957608abefc" Dec 13 01:33:09.167413 containerd[2086]: time="2024-12-13T01:33:09.166726451Z" level=error msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" failed" error="failed to destroy network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.167639 kubelet[3500]: E1213 01:33:09.167094 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:09.167639 kubelet[3500]: E1213 01:33:09.167144 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db"} Dec 13 01:33:09.167639 kubelet[3500]: E1213 01:33:09.167208 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:09.167639 kubelet[3500]: E1213 01:33:09.167249 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" podUID="17b16b6f-09f1-4ddb-a8a5-c0692d5afcac" Dec 13 01:33:09.167910 containerd[2086]: time="2024-12-13T01:33:09.167416141Z" level=error msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" failed" error="failed to destroy network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.168095 kubelet[3500]: E1213 01:33:09.168036 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:09.168780 kubelet[3500]: E1213 01:33:09.168650 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c"} Dec 13 01:33:09.168780 kubelet[3500]: E1213 01:33:09.168712 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e3b0103-676b-4665-aa0f-646d586812a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Dec 13 01:33:09.168780 kubelet[3500]: E1213 01:33:09.168752 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e3b0103-676b-4665-aa0f-646d586812a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-blsbn" podUID="0e3b0103-676b-4665-aa0f-646d586812a7" Dec 13 01:33:09.181060 containerd[2086]: time="2024-12-13T01:33:09.181006465Z" level=error msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" failed" error="failed to destroy network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.181664 kubelet[3500]: E1213 01:33:09.181476 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:09.181664 kubelet[3500]: E1213 01:33:09.181531 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6"} Dec 13 01:33:09.181664 kubelet[3500]: E1213 01:33:09.181592 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0200e2b4-cc5f-4c09-802c-ef6b6feab695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:09.181664 kubelet[3500]: E1213 01:33:09.181635 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0200e2b4-cc5f-4c09-802c-ef6b6feab695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" podUID="0200e2b4-cc5f-4c09-802c-ef6b6feab695" Dec 13 01:33:09.186064 containerd[2086]: time="2024-12-13T01:33:09.185596505Z" level=error msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" failed" error="failed to destroy network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:09.186158 kubelet[3500]: E1213 01:33:09.185858 3500 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:09.186158 kubelet[3500]: E1213 01:33:09.185901 3500 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2"} Dec 13 01:33:09.186158 kubelet[3500]: E1213 01:33:09.185958 3500 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:09.186158 kubelet[3500]: E1213 01:33:09.185998 3500 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37e377bc-ecb2-46be-838f-bd5c4df2e7cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nk5pm" podUID="37e377bc-ecb2-46be-838f-bd5c4df2e7cf" Dec 13 01:33:15.053938 kubelet[3500]: I1213 01:33:15.053884 3500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:15.696612 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:15.698543 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:15.696671 systemd-resolved[1968]: Flushed all caches. Dec 13 01:33:16.072634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752523348.mount: Deactivated successfully. 
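Every sandbox add and delete between 01:33:08 and 01:33:09 fails on the same "stat /var/lib/calico/nodename" error: the Calico CNI plugin reads that file to learn which node it is running on, and calico/node only writes it once its container is up, which is what the node image pull finishing below unblocks. A minimal sketch of the check the plugin keeps failing, run on the node itself (the printed messages are mine, not the plugin's):

    # What the repeated "stat /var/lib/calico/nodename" failure amounts to:
    # the CNI plugin needs the node name that calico/node writes to this file.
    from pathlib import Path

    nodename = Path("/var/lib/calico/nodename")
    if nodename.exists():
        print("calico/node has registered this node as", nodename.read_text().strip())
    else:
        print("nodename missing: calico/node not running yet, or /var/lib/calico not mounted")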
Dec 13 01:33:16.166327 containerd[2086]: time="2024-12-13T01:33:16.156118042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:33:16.182608 containerd[2086]: time="2024-12-13T01:33:16.182544864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.236561737s" Dec 13 01:33:16.182608 containerd[2086]: time="2024-12-13T01:33:16.182599924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:33:16.220679 containerd[2086]: time="2024-12-13T01:33:16.220622145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:16.268366 containerd[2086]: time="2024-12-13T01:33:16.267161509Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:16.269032 containerd[2086]: time="2024-12-13T01:33:16.268983069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:16.328833 containerd[2086]: time="2024-12-13T01:33:16.328711998Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:33:16.441593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33374988.mount: Deactivated successfully. Dec 13 01:33:16.485293 containerd[2086]: time="2024-12-13T01:33:16.485241642Z" level=info msg="CreateContainer within sandbox \"4b5a14711aea015f978538cab31058cdedc020d701f855b8a06245a986479047\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d1a5551f25c7d0dba3047f91bcaab295db341ac67d825ce88bd7761ffcc00e8\"" Dec 13 01:33:16.491798 containerd[2086]: time="2024-12-13T01:33:16.491453276Z" level=info msg="StartContainer for \"5d1a5551f25c7d0dba3047f91bcaab295db341ac67d825ce88bd7761ffcc00e8\"" Dec 13 01:33:16.802625 containerd[2086]: time="2024-12-13T01:33:16.800642527Z" level=info msg="StartContainer for \"5d1a5551f25c7d0dba3047f91bcaab295db341ac67d825ce88bd7761ffcc00e8\" returns successfully" Dec 13 01:33:17.052613 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:33:17.052741 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:33:17.744442 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:17.746688 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:17.744452 systemd-resolved[1968]: Flushed all caches. Dec 13 01:33:18.941726 systemd[1]: Started sshd@7-172.31.29.53:22-139.178.68.195:52368.service - OpenSSH per-connection server daemon (139.178.68.195:52368). 
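With calico-node started (StartContainer above), the dataplane interfaces begin to appear: the vxlan.calico link and, in later entries, per-workload cali* veths such as cali7c3980a2145. A small standard-library sketch for confirming that from the node, assuming the interface names seen in this log (Linux only):

    # List the interfaces Calico creates on the node; vxlan.calico and cali* veths
    # should be present once calico-node's dataplane is up.
    import socket

    names = [name for _, name in socket.if_nameindex()]
    print("vxlan.calico present:", "vxlan.calico" in names)
    print("workload veths:", [n for n in names if n.startswith("cali")])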
Dec 13 01:33:19.208422 kernel: bpftool[4806]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:33:19.248766 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 52368 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:19.253175 sshd[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:19.263270 systemd-logind[2059]: New session 8 of user core. Dec 13 01:33:19.278244 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:33:19.639960 systemd-networkd[1647]: vxlan.calico: Link UP Dec 13 01:33:19.639980 systemd-networkd[1647]: vxlan.calico: Gained carrier Dec 13 01:33:19.648324 sshd[4770]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:19.657765 (udev-worker)[4833]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:33:19.675643 systemd[1]: sshd@7-172.31.29.53:22-139.178.68.195:52368.service: Deactivated successfully. Dec 13 01:33:19.681019 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:33:19.681649 systemd-logind[2059]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:33:19.694278 systemd-logind[2059]: Removed session 8. Dec 13 01:33:19.699778 containerd[2086]: time="2024-12-13T01:33:19.696320126Z" level=info msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" Dec 13 01:33:19.702766 containerd[2086]: time="2024-12-13T01:33:19.699682011Z" level=info msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" Dec 13 01:33:19.710750 (udev-worker)[4853]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:33:19.917926 kubelet[3500]: I1213 01:33:19.917498 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-7xhp8" podStartSLOduration=5.474666727 podStartE2EDuration="28.903841575s" podCreationTimestamp="2024-12-13 01:32:51 +0000 UTC" firstStartedPulling="2024-12-13 01:32:52.753766882 +0000 UTC m=+21.253360693" lastFinishedPulling="2024-12-13 01:33:16.18294173 +0000 UTC m=+44.682535541" observedRunningTime="2024-12-13 01:33:17.077380538 +0000 UTC m=+45.576974360" watchObservedRunningTime="2024-12-13 01:33:19.903841575 +0000 UTC m=+48.403435396" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.902 [INFO][4886] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.903 [INFO][4886] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" iface="eth0" netns="/var/run/netns/cni-b2aefa39-76a6-5a69-27a9-21f7a313f1bf" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.904 [INFO][4886] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" iface="eth0" netns="/var/run/netns/cni-b2aefa39-76a6-5a69-27a9-21f7a313f1bf" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.909 [INFO][4886] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" iface="eth0" netns="/var/run/netns/cni-b2aefa39-76a6-5a69-27a9-21f7a313f1bf" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.909 [INFO][4886] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:19.909 [INFO][4886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.188 [INFO][4899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.190 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.190 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.201 [WARNING][4899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.201 [INFO][4899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.206 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:20.224537 containerd[2086]: 2024-12-13 01:33:20.214 [INFO][4886] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:20.254412 containerd[2086]: time="2024-12-13T01:33:20.250268250Z" level=info msg="TearDown network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" successfully" Dec 13 01:33:20.254412 containerd[2086]: time="2024-12-13T01:33:20.250438706Z" level=info msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" returns successfully" Dec 13 01:33:20.252223 systemd[1]: run-netns-cni\x2db2aefa39\x2d76a6\x2d5a69\x2d27a9\x2d21f7a313f1bf.mount: Deactivated successfully. Dec 13 01:33:20.258708 containerd[2086]: time="2024-12-13T01:33:20.255697701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nk5pm,Uid:37e377bc-ecb2-46be-838f-bd5c4df2e7cf,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.906 [INFO][4885] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.906 [INFO][4885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" iface="eth0" netns="/var/run/netns/cni-306843ae-6542-e8c4-878e-b9a215a8f729" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.910 [INFO][4885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" iface="eth0" netns="/var/run/netns/cni-306843ae-6542-e8c4-878e-b9a215a8f729" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.910 [INFO][4885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" iface="eth0" netns="/var/run/netns/cni-306843ae-6542-e8c4-878e-b9a215a8f729" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.911 [INFO][4885] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:19.911 [INFO][4885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.188 [INFO][4900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.190 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.206 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.222 [WARNING][4900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.222 [INFO][4900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.227 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:20.259480 containerd[2086]: 2024-12-13 01:33:20.240 [INFO][4885] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:20.259480 containerd[2086]: time="2024-12-13T01:33:20.258995959Z" level=info msg="TearDown network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" successfully" Dec 13 01:33:20.259480 containerd[2086]: time="2024-12-13T01:33:20.259023768Z" level=info msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" returns successfully" Dec 13 01:33:20.264484 containerd[2086]: time="2024-12-13T01:33:20.263660378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9648,Uid:8a855430-bc04-49b7-b6f5-e957608abefc,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:20.264653 systemd[1]: run-netns-cni\x2d306843ae\x2d6542\x2de8c4\x2d878e\x2db9a215a8f729.mount: Deactivated successfully. Dec 13 01:33:20.597310 systemd-networkd[1647]: cali7c3980a2145: Link UP Dec 13 01:33:20.599684 systemd-networkd[1647]: cali7c3980a2145: Gained carrier Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.440 [INFO][4947] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0 coredns-76f75df574- kube-system 8a855430-bc04-49b7-b6f5-e957608abefc 825 0 2024-12-13 01:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-53 coredns-76f75df574-z9648 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c3980a2145 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.440 [INFO][4947] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.509 [INFO][4969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" HandleID="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.540 [INFO][4969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" HandleID="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-53", "pod":"coredns-76f75df574-z9648", "timestamp":"2024-12-13 01:33:20.509047545 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 
01:33:20.540 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.540 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.540 [INFO][4969] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.546 [INFO][4969] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.555 [INFO][4969] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.561 [INFO][4969] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.563 [INFO][4969] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.566 [INFO][4969] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.566 [INFO][4969] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.568 [INFO][4969] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.573 [INFO][4969] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.583 [INFO][4969] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.1/26] block=192.168.99.0/26 handle="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.583 [INFO][4969] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.1/26] handle="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" host="ip-172-31-29-53" Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.583 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
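The IPAM entries above show the sequence Calico follows for coredns-76f75df574-z9648: take the host-wide IPAM lock, confirm this node's affinity for block 192.168.99.0/26, and claim 192.168.99.1. The assignArgs structure printed in the log maps onto a request built roughly as in the sketch below; the import path, the ipam.Interface name and the AutoAssign return values are assumptions (they differ between libcalico-go versions), and only the field values are taken from the log.

    // Sketch only: builds the AutoAssignArgs whose fields the plugin prints above.
    // Client wiring and return shape are assumed, not taken from this log.
    package sketch

    import (
        "context"
        "fmt"

        "github.com/projectcalico/calico/libcalico-go/lib/ipam" // import path is an assumption
    )

    func requestCorednsIP(ic ipam.Interface) error {
        handle := "k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d"
        args := ipam.AutoAssignArgs{
            Num4:     1, // one IPv4, no IPv6, exactly as logged (Num4:1, Num6:0)
            HandleID: &handle,
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "ip-172-31-29-53",
                "pod":       "coredns-76f75df574-z9648",
            },
            Hostname:    "ip-172-31-29-53",
            IntendedUse: "Workload",
        }
        v4, _, err := ic.AutoAssign(context.Background(), args)
        if err != nil {
            return err
        }
        fmt.Println("assigned:", v4) // the log shows 192.168.99.1/26 coming back
        return nil
    }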
Dec 13 01:33:20.628397 containerd[2086]: 2024-12-13 01:33:20.583 [INFO][4969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.1/26] IPv6=[] ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" HandleID="k8s-pod-network.dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.589 [INFO][4947] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8a855430-bc04-49b7-b6f5-e957608abefc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"coredns-76f75df574-z9648", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c3980a2145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.589 [INFO][4947] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.1/32] ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.589 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c3980a2145 ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.600 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 
13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.600 [INFO][4947] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8a855430-bc04-49b7-b6f5-e957608abefc", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d", Pod:"coredns-76f75df574-z9648", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c3980a2145", MAC:"2a:66:b6:0a:3f:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:20.631149 containerd[2086]: 2024-12-13 01:33:20.623 [INFO][4947] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d" Namespace="kube-system" Pod="coredns-76f75df574-z9648" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:20.682078 systemd-networkd[1647]: cali37868fa7cf9: Link UP Dec 13 01:33:20.682326 systemd-networkd[1647]: cali37868fa7cf9: Gained carrier Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.440 [INFO][4942] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0 csi-node-driver- calico-system 37e377bc-ecb2-46be-838f-bd5c4df2e7cf 824 0 2024-12-13 01:32:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-53 csi-node-driver-nk5pm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali37868fa7cf9 [] []}} ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" 
Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.440 [INFO][4942] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.517 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" HandleID="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.546 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" HandleID="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002641c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-53", "pod":"csi-node-driver-nk5pm", "timestamp":"2024-12-13 01:33:20.517377852 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.546 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.584 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
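In the coredns WorkloadEndpoint dumps a few entries back, the ports are printed as Go hex literals (Port:0x35, Port:0x23c1). Decoded, they are the expected values; a trivial standalone check:

    package main

    import "fmt"

    func main() {
        // Values copied from the endpoint dump above.
        fmt.Println(0x35)   // 53   -> the dns (UDP) and dns-tcp (TCP) ports
        fmt.Println(0x23c1) // 9153 -> the coredns Prometheus metrics port
    }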
Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.584 [INFO][4965] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.587 [INFO][4965] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.595 [INFO][4965] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.620 [INFO][4965] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.627 [INFO][4965] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.631 [INFO][4965] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.632 [INFO][4965] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.634 [INFO][4965] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41 Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.645 [INFO][4965] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.656 [INFO][4965] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.2/26] block=192.168.99.0/26 handle="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.656 [INFO][4965] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.2/26] handle="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" host="ip-172-31-29-53" Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.656 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
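The same IPAM walk repeats above for csi-node-driver-nk5pm: load block 192.168.99.0/26 under the host-wide lock, pick the next free address, and write the block back to claim 192.168.99.2. A heavily simplified sketch of that claim-by-writing-the-block idea follows; the store, lock and types here are hypothetical stand-ins, not Calico's actual data model:

    // Hypothetical, simplified allocator. Calico's real block structure,
    // datastore and compare-and-swap semantics are considerably richer.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        cidr netip.Prefix          // e.g. 192.168.99.0/26
        used map[netip.Addr]string // address -> handle that claimed it
    }

    var hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

    func claim(b *block, handle string) (netip.Addr, bool) {
        hostLock.Lock()
        defer hostLock.Unlock()
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.99.0/26"), used: map[netip.Addr]string{}}
        for _, h := range []string{"coredns-76f75df574-z9648", "csi-node-driver-nk5pm"} {
            ip, _ := claim(b, h)
            fmt.Println(h, "->", ip) // 192.168.99.1, then 192.168.99.2, matching the log
        }
    }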
Dec 13 01:33:20.740373 containerd[2086]: 2024-12-13 01:33:20.657 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.2/26] IPv6=[] ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" HandleID="k8s-pod-network.92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.671 [INFO][4942] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37e377bc-ecb2-46be-838f-bd5c4df2e7cf", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"csi-node-driver-nk5pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37868fa7cf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.671 [INFO][4942] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.2/32] ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.671 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37868fa7cf9 ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.684 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.686 [INFO][4942] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" 
WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37e377bc-ecb2-46be-838f-bd5c4df2e7cf", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41", Pod:"csi-node-driver-nk5pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37868fa7cf9", MAC:"f6:80:10:74:53:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:20.748607 containerd[2086]: 2024-12-13 01:33:20.722 [INFO][4942] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41" Namespace="calico-system" Pod="csi-node-driver-nk5pm" WorkloadEndpoint="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:20.799108 containerd[2086]: time="2024-12-13T01:33:20.798298337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:20.799108 containerd[2086]: time="2024-12-13T01:33:20.798411693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:20.799108 containerd[2086]: time="2024-12-13T01:33:20.798438875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:20.799108 containerd[2086]: time="2024-12-13T01:33:20.798581255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:20.836403 containerd[2086]: time="2024-12-13T01:33:20.835996861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:20.836403 containerd[2086]: time="2024-12-13T01:33:20.836109552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:20.836403 containerd[2086]: time="2024-12-13T01:33:20.836131193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:20.836403 containerd[2086]: time="2024-12-13T01:33:20.836319907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:20.940391 containerd[2086]: time="2024-12-13T01:33:20.939888126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nk5pm,Uid:37e377bc-ecb2-46be-838f-bd5c4df2e7cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41\"" Dec 13 01:33:20.944605 systemd-networkd[1647]: vxlan.calico: Gained IPv6LL Dec 13 01:33:20.945854 containerd[2086]: time="2024-12-13T01:33:20.945524833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:33:20.956840 containerd[2086]: time="2024-12-13T01:33:20.956673668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z9648,Uid:8a855430-bc04-49b7-b6f5-e957608abefc,Namespace:kube-system,Attempt:1,} returns sandbox id \"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d\"" Dec 13 01:33:20.962906 containerd[2086]: time="2024-12-13T01:33:20.962856825Z" level=info msg="CreateContainer within sandbox \"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:33:21.031887 containerd[2086]: time="2024-12-13T01:33:21.031657605Z" level=info msg="CreateContainer within sandbox \"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"463dcafeb5a7437ac69c8281212d967003e6a21d9edb22267a2d83753e385660\"" Dec 13 01:33:21.036506 containerd[2086]: time="2024-12-13T01:33:21.033292268Z" level=info msg="StartContainer for \"463dcafeb5a7437ac69c8281212d967003e6a21d9edb22267a2d83753e385660\"" Dec 13 01:33:21.137090 containerd[2086]: time="2024-12-13T01:33:21.137017719Z" level=info msg="StartContainer for \"463dcafeb5a7437ac69c8281212d967003e6a21d9edb22267a2d83753e385660\" returns successfully" Dec 13 01:33:21.699304 containerd[2086]: time="2024-12-13T01:33:21.698759655Z" level=info msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.778 [INFO][5135] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.778 [INFO][5135] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" iface="eth0" netns="/var/run/netns/cni-f342cbaa-918f-403f-6b95-14900cff12de" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.778 [INFO][5135] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" iface="eth0" netns="/var/run/netns/cni-f342cbaa-918f-403f-6b95-14900cff12de" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.779 [INFO][5135] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" iface="eth0" netns="/var/run/netns/cni-f342cbaa-918f-403f-6b95-14900cff12de" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.779 [INFO][5135] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.779 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.811 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.812 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.812 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.820 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.820 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.823 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:21.855425 containerd[2086]: 2024-12-13 01:33:21.851 [INFO][5135] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:21.860487 containerd[2086]: time="2024-12-13T01:33:21.860438785Z" level=info msg="TearDown network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" successfully" Dec 13 01:33:21.860487 containerd[2086]: time="2024-12-13T01:33:21.860487256Z" level=info msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" returns successfully" Dec 13 01:33:21.861282 containerd[2086]: time="2024-12-13T01:33:21.861248050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-6rk5q,Uid:17b16b6f-09f1-4ddb-a8a5-c0692d5afcac,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:21.862580 systemd[1]: run-netns-cni\x2df342cbaa\x2d918f\x2d403f\x2d6b95\x2d14900cff12de.mount: Deactivated successfully. 
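The teardown above for the calico-apiserver sandbox ends the same way as the earlier coredns and csi-node-driver teardowns: the CNI DEL releases addresses by the sandbox's handleID, IPAM answers with "Asked to release address but it doesn't exist. Ignoring", and systemd cleans up the netns bind mount. Release is intentionally idempotent, so a repeated or late DEL never fails. A hypothetical sketch of that behaviour (the in-memory store and names are made up for illustration):

    // Hypothetical: "already released" is treated as success and only warned
    // about, mirroring the WARNING entries in the log above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    var byHandle = map[string][]netip.Addr{} // handleID -> claimed addresses (assumed store)

    func releaseByHandle(handle string) {
        addrs, ok := byHandle[handle]
        if !ok {
            fmt.Println("WARNING: asked to release", handle, "but it doesn't exist; ignoring")
            return
        }
        delete(byHandle, handle)
        fmt.Println("released", addrs, "for", handle)
    }

    func main() {
        // A second DEL for the same sandbox, or one whose state was already
        // cleaned up, simply warns and returns.
        releaseByHandle("k8s-pod-network.example-sandbox-id")
    }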
Dec 13 01:33:21.967562 systemd-networkd[1647]: cali37868fa7cf9: Gained IPv6LL Dec 13 01:33:22.063838 systemd-networkd[1647]: cali369a1dcf294: Link UP Dec 13 01:33:22.065674 systemd-networkd[1647]: cali369a1dcf294: Gained carrier Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:21.939 [INFO][5149] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0 calico-apiserver-7977594b99- calico-apiserver 17b16b6f-09f1-4ddb-a8a5-c0692d5afcac 843 0 2024-12-13 01:32:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7977594b99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-53 calico-apiserver-7977594b99-6rk5q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali369a1dcf294 [] []}} ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:21.939 [INFO][5149] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:21.990 [INFO][5160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" HandleID="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.004 [INFO][5160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" HandleID="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-53", "pod":"calico-apiserver-7977594b99-6rk5q", "timestamp":"2024-12-13 01:33:21.990190807 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.004 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.004 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
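Each "cali...: Link UP" / "Gained carrier" pair that systemd-networkd reports above is the host side of a freshly created veth pair: the plugin creates the pair, keeps the caliXXX end in the root namespace (the name announced by the "Setting the host side veth name" entries), and moves the peer into the pod's netns as eth0. A bare-bones sketch of that step using the vishvananda/netlink package; error paths are trimmed and the function is an illustration, not Calico's dataplane code:

    package sketch

    import "github.com/vishvananda/netlink"

    // createVeth wires up a host<->pod veth pair roughly as described above:
    // the host end (e.g. "cali369a1dcf294") stays in the root netns, the peer
    // is pushed into the pod's network namespace to become its eth0.
    func createVeth(hostName string, podNsFd int) error {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: hostName},
            PeerName:  "eth0",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            return err
        }
        peer, err := netlink.LinkByName("eth0")
        if err != nil {
            return err
        }
        if err := netlink.LinkSetNsFd(peer, podNsFd); err != nil {
            return err
        }
        // Bringing the host end up is the point where systemd-networkd logs
        // "Link UP" and "Gained carrier" for the cali interface.
        return netlink.LinkSetUp(veth)
    }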
Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.004 [INFO][5160] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.010 [INFO][5160] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.018 [INFO][5160] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.025 [INFO][5160] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.027 [INFO][5160] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.030 [INFO][5160] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.030 [INFO][5160] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.032 [INFO][5160] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78 Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.041 [INFO][5160] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.055 [INFO][5160] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.3/26] block=192.168.99.0/26 handle="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.055 [INFO][5160] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.3/26] handle="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" host="ip-172-31-29-53" Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.055 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
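After the veth name is set, the dataplane logs "Disabling IPv4 forwarding" for each endpoint (seen for coredns, csi-node-driver and, above, calico-apiserver). In practice that kind of step amounts to a per-interface sysctl write; which interface and which namespace Calico actually targets is not visible in this log, so the snippet below is a guess at the mechanism only:

    package sketch

    import (
        "fmt"
        "os"
    )

    // disableIPv4Forwarding writes 0 to the per-interface forwarding sysctl.
    // The interface name and the namespace this runs in are assumptions; the
    // log only says that forwarding is being disabled for the endpoint.
    func disableIPv4Forwarding(iface string) error {
        path := fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/forwarding", iface)
        return os.WriteFile(path, []byte("0"), 0o644)
    }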
Dec 13 01:33:22.104255 containerd[2086]: 2024-12-13 01:33:22.056 [INFO][5160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.3/26] IPv6=[] ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" HandleID="k8s-pod-network.4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.059 [INFO][5149] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"calico-apiserver-7977594b99-6rk5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali369a1dcf294", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.059 [INFO][5149] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.3/32] ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.059 [INFO][5149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali369a1dcf294 ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.065 [INFO][5149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.067 [INFO][5149] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78", Pod:"calico-apiserver-7977594b99-6rk5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali369a1dcf294", MAC:"ce:14:63:81:08:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:22.106598 containerd[2086]: 2024-12-13 01:33:22.089 [INFO][5149] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-6rk5q" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:22.131784 kubelet[3500]: I1213 01:33:22.130448 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z9648" podStartSLOduration=39.130389118 podStartE2EDuration="39.130389118s" podCreationTimestamp="2024-12-13 01:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:22.124924492 +0000 UTC m=+50.624518311" watchObservedRunningTime="2024-12-13 01:33:22.130389118 +0000 UTC m=+50.629982960" Dec 13 01:33:22.278489 containerd[2086]: time="2024-12-13T01:33:22.278006759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:22.278489 containerd[2086]: time="2024-12-13T01:33:22.278117722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:22.278489 containerd[2086]: time="2024-12-13T01:33:22.278142444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:22.278489 containerd[2086]: time="2024-12-13T01:33:22.278278562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:22.287455 systemd-networkd[1647]: cali7c3980a2145: Gained IPv6LL Dec 13 01:33:22.498375 containerd[2086]: time="2024-12-13T01:33:22.497724223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-6rk5q,Uid:17b16b6f-09f1-4ddb-a8a5-c0692d5afcac,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78\"" Dec 13 01:33:22.688785 containerd[2086]: time="2024-12-13T01:33:22.687883336Z" level=info msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" Dec 13 01:33:22.689543 containerd[2086]: time="2024-12-13T01:33:22.689515285Z" level=info msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" Dec 13 01:33:23.138773 containerd[2086]: time="2024-12-13T01:33:23.137909370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:23.146877 containerd[2086]: time="2024-12-13T01:33:23.146365117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:33:23.153512 containerd[2086]: time="2024-12-13T01:33:23.151993172Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:23.169363 containerd[2086]: time="2024-12-13T01:33:23.164417098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:23.169363 containerd[2086]: time="2024-12-13T01:33:23.166385049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.220821047s" Dec 13 01:33:23.169363 containerd[2086]: time="2024-12-13T01:33:23.166426512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:33:23.169363 containerd[2086]: time="2024-12-13T01:33:23.168574553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:33:23.178673 containerd[2086]: time="2024-12-13T01:33:23.178162620Z" level=info msg="CreateContainer within sandbox \"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.950 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.953 [INFO][5249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" iface="eth0" netns="/var/run/netns/cni-ccf25627-4d72-317c-394e-0b5de4ce6af5" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.955 [INFO][5249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" iface="eth0" netns="/var/run/netns/cni-ccf25627-4d72-317c-394e-0b5de4ce6af5" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.956 [INFO][5249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" iface="eth0" netns="/var/run/netns/cni-ccf25627-4d72-317c-394e-0b5de4ce6af5" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.956 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:22.956 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.154 [INFO][5265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.154 [INFO][5265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.155 [INFO][5265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.176 [WARNING][5265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.176 [INFO][5265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.187 [INFO][5265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:23.223619 containerd[2086]: 2024-12-13 01:33:23.201 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:23.237077 containerd[2086]: time="2024-12-13T01:33:23.232488829Z" level=info msg="TearDown network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" successfully" Dec 13 01:33:23.237077 containerd[2086]: time="2024-12-13T01:33:23.232529600Z" level=info msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" returns successfully" Dec 13 01:33:23.238256 systemd[1]: run-netns-cni\x2dccf25627\x2d4d72\x2d317c\x2d394e\x2d0b5de4ce6af5.mount: Deactivated successfully. 
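The two kubelet pod_startup_latency_tracker entries in this section are internally consistent: for calico-node-7xhp8 the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp (01:33:19.903841575 - 01:32:51 = 28.903841575s), and the SLO duration is that figure minus the image-pull window (01:33:16.18294173 - 01:32:52.753766882 = 23.429174848s), which leaves the logged 5.474666727s; for coredns-76f75df574-z9648 nothing was pulled, so SLO and E2E are both 39.130389118s. A quick check of the first pod's numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the kubelet entries above.
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2024-12-13 01:32:51 +0000 UTC")
        running := parse("2024-12-13 01:33:19.903841575 +0000 UTC")
        pullStart := parse("2024-12-13 01:32:52.753766882 +0000 UTC")
        pullEnd := parse("2024-12-13 01:33:16.18294173 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        fmt.Println(e2e) // 28.903841575s, the logged podStartE2EDuration
        fmt.Println(slo) // 5.474666727s, the logged podStartSLOduration
    }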
Dec 13 01:33:23.239476 containerd[2086]: time="2024-12-13T01:33:23.238797061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blsbn,Uid:0e3b0103-676b-4665-aa0f-646d586812a7,Namespace:kube-system,Attempt:1,}" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.000 [INFO][5256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.002 [INFO][5256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" iface="eth0" netns="/var/run/netns/cni-2537fa7b-b3d8-f28f-ecd3-1d34bae63197" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.006 [INFO][5256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" iface="eth0" netns="/var/run/netns/cni-2537fa7b-b3d8-f28f-ecd3-1d34bae63197" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.007 [INFO][5256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" iface="eth0" netns="/var/run/netns/cni-2537fa7b-b3d8-f28f-ecd3-1d34bae63197" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.008 [INFO][5256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.008 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.236 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.241 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.241 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.252 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.252 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.254 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:23.260041 containerd[2086]: 2024-12-13 01:33:23.257 [INFO][5256] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:23.263614 containerd[2086]: time="2024-12-13T01:33:23.261264529Z" level=info msg="TearDown network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" successfully" Dec 13 01:33:23.263614 containerd[2086]: time="2024-12-13T01:33:23.261292652Z" level=info msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" returns successfully" Dec 13 01:33:23.265480 containerd[2086]: time="2024-12-13T01:33:23.263980075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd8db86d-nkrpz,Uid:32353b72-7c12-4d50-b914-daf11815550a,Namespace:calico-system,Attempt:1,}" Dec 13 01:33:23.267020 systemd[1]: run-netns-cni\x2d2537fa7b\x2db3d8\x2df28f\x2decd3\x2d1d34bae63197.mount: Deactivated successfully. Dec 13 01:33:23.295697 containerd[2086]: time="2024-12-13T01:33:23.295644853Z" level=info msg="CreateContainer within sandbox \"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6e36b1ff8c14773876f22600809f621bcf5df2332bf4eceb78b4a9d3a68d2ff8\"" Dec 13 01:33:23.297220 containerd[2086]: time="2024-12-13T01:33:23.297167768Z" level=info msg="StartContainer for \"6e36b1ff8c14773876f22600809f621bcf5df2332bf4eceb78b4a9d3a68d2ff8\"" Dec 13 01:33:23.471460 containerd[2086]: time="2024-12-13T01:33:23.470286893Z" level=info msg="StartContainer for \"6e36b1ff8c14773876f22600809f621bcf5df2332bf4eceb78b4a9d3a68d2ff8\" returns successfully" Dec 13 01:33:23.647472 systemd-networkd[1647]: caliaaaab6756f6: Link UP Dec 13 01:33:23.648484 systemd-networkd[1647]: caliaaaab6756f6: Gained carrier Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.483 [INFO][5291] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0 coredns-76f75df574- kube-system 0e3b0103-676b-4665-aa0f-646d586812a7 860 0 2024-12-13 01:32:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-53 coredns-76f75df574-blsbn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaaaab6756f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.484 [INFO][5291] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.553 [INFO][5335] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" HandleID="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.569 [INFO][5335] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" HandleID="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003194d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-53", "pod":"coredns-76f75df574-blsbn", "timestamp":"2024-12-13 01:33:23.55340142 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.572 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.572 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.572 [INFO][5335] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.576 [INFO][5335] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.584 [INFO][5335] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.595 [INFO][5335] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.600 [INFO][5335] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.609 [INFO][5335] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.609 [INFO][5335] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.612 [INFO][5335] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87 Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.620 [INFO][5335] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.632 [INFO][5335] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.4/26] block=192.168.99.0/26 handle="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.632 [INFO][5335] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.4/26] handle="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" host="ip-172-31-29-53" Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.632 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:23.687893 containerd[2086]: 2024-12-13 01:33:23.632 [INFO][5335] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.4/26] IPv6=[] ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" HandleID="k8s-pod-network.4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.636 [INFO][5291] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0e3b0103-676b-4665-aa0f-646d586812a7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"coredns-76f75df574-blsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaaab6756f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.637 [INFO][5291] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.4/32] ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.637 [INFO][5291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaaaab6756f6 ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.645 [INFO][5291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 
13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.646 [INFO][5291] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0e3b0103-676b-4665-aa0f-646d586812a7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87", Pod:"coredns-76f75df574-blsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaaab6756f6", MAC:"e2:20:8f:4a:5d:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:23.692946 containerd[2086]: 2024-12-13 01:33:23.676 [INFO][5291] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87" Namespace="kube-system" Pod="coredns-76f75df574-blsbn" WorkloadEndpoint="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:23.694365 containerd[2086]: time="2024-12-13T01:33:23.691434328Z" level=info msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" Dec 13 01:33:23.761374 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:23.759483 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:23.759552 systemd-resolved[1968]: Flushed all caches. 
Dec 13 01:33:23.771493 systemd-networkd[1647]: cali19677a8587e: Link UP Dec 13 01:33:23.777099 systemd-networkd[1647]: cali19677a8587e: Gained carrier Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.486 [INFO][5293] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0 calico-kube-controllers-ddd8db86d- calico-system 32353b72-7c12-4d50-b914-daf11815550a 862 0 2024-12-13 01:32:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ddd8db86d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-53 calico-kube-controllers-ddd8db86d-nkrpz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali19677a8587e [] []}} ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.486 [INFO][5293] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.609 [INFO][5339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" HandleID="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.623 [INFO][5339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" HandleID="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000455b40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-53", "pod":"calico-kube-controllers-ddd8db86d-nkrpz", "timestamp":"2024-12-13 01:33:23.609365573 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.623 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.633 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.633 [INFO][5339] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.638 [INFO][5339] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.669 [INFO][5339] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.695 [INFO][5339] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.706 [INFO][5339] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.723 [INFO][5339] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.723 [INFO][5339] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.728 [INFO][5339] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.741 [INFO][5339] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.754 [INFO][5339] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.5/26] block=192.168.99.0/26 handle="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.756 [INFO][5339] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.5/26] handle="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" host="ip-172-31-29-53" Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.756 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:23.820921 containerd[2086]: 2024-12-13 01:33:23.756 [INFO][5339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.5/26] IPv6=[] ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" HandleID="k8s-pod-network.fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.763 [INFO][5293] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0", GenerateName:"calico-kube-controllers-ddd8db86d-", Namespace:"calico-system", SelfLink:"", UID:"32353b72-7c12-4d50-b914-daf11815550a", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd8db86d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"calico-kube-controllers-ddd8db86d-nkrpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19677a8587e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.763 [INFO][5293] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.5/32] ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.763 [INFO][5293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19677a8587e ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.775 [INFO][5293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.780 [INFO][5293] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0", GenerateName:"calico-kube-controllers-ddd8db86d-", Namespace:"calico-system", SelfLink:"", UID:"32353b72-7c12-4d50-b914-daf11815550a", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd8db86d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d", Pod:"calico-kube-controllers-ddd8db86d-nkrpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19677a8587e", MAC:"92:8b:de:40:ce:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:23.824089 containerd[2086]: 2024-12-13 01:33:23.806 [INFO][5293] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d" Namespace="calico-system" Pod="calico-kube-controllers-ddd8db86d-nkrpz" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:23.824617 systemd-networkd[1647]: cali369a1dcf294: Gained IPv6LL Dec 13 01:33:23.883235 containerd[2086]: time="2024-12-13T01:33:23.882935616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:23.883235 containerd[2086]: time="2024-12-13T01:33:23.883019672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:23.883235 containerd[2086]: time="2024-12-13T01:33:23.883035932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:23.883235 containerd[2086]: time="2024-12-13T01:33:23.883143145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:23.944244 containerd[2086]: time="2024-12-13T01:33:23.943958282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:23.944244 containerd[2086]: time="2024-12-13T01:33:23.944063757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:23.944244 containerd[2086]: time="2024-12-13T01:33:23.944083282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:23.946145 containerd[2086]: time="2024-12-13T01:33:23.946058720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:24.092638 containerd[2086]: time="2024-12-13T01:33:24.092577055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-blsbn,Uid:0e3b0103-676b-4665-aa0f-646d586812a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87\"" Dec 13 01:33:24.097790 containerd[2086]: time="2024-12-13T01:33:24.097563850Z" level=info msg="CreateContainer within sandbox \"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:33:24.126289 containerd[2086]: time="2024-12-13T01:33:24.126101341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ddd8db86d-nkrpz,Uid:32353b72-7c12-4d50-b914-daf11815550a,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d\"" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.974 [INFO][5373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.975 [INFO][5373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" iface="eth0" netns="/var/run/netns/cni-78d16566-7172-4e84-dcc9-a169d715b7fe" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.976 [INFO][5373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" iface="eth0" netns="/var/run/netns/cni-78d16566-7172-4e84-dcc9-a169d715b7fe" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.978 [INFO][5373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" iface="eth0" netns="/var/run/netns/cni-78d16566-7172-4e84-dcc9-a169d715b7fe" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.978 [INFO][5373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:23.978 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.116 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.117 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.117 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.125 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.125 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.131 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:24.136747 containerd[2086]: 2024-12-13 01:33:24.134 [INFO][5373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:24.137753 containerd[2086]: time="2024-12-13T01:33:24.137223034Z" level=info msg="TearDown network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" successfully" Dec 13 01:33:24.137753 containerd[2086]: time="2024-12-13T01:33:24.137248202Z" level=info msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" returns successfully" Dec 13 01:33:24.137753 containerd[2086]: time="2024-12-13T01:33:24.137715284Z" level=info msg="CreateContainer within sandbox \"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fdc35d548deb72a5a742effe311484dcb5d3472a6cca555d7bcd94d5cad412e\"" Dec 13 01:33:24.138916 containerd[2086]: time="2024-12-13T01:33:24.138444042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-m4sgj,Uid:0200e2b4-cc5f-4c09-802c-ef6b6feab695,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:33:24.139246 containerd[2086]: time="2024-12-13T01:33:24.138923279Z" level=info msg="StartContainer for \"5fdc35d548deb72a5a742effe311484dcb5d3472a6cca555d7bcd94d5cad412e\"" Dec 13 01:33:24.248953 containerd[2086]: time="2024-12-13T01:33:24.246856876Z" level=info msg="StartContainer for \"5fdc35d548deb72a5a742effe311484dcb5d3472a6cca555d7bcd94d5cad412e\" returns successfully" Dec 13 01:33:24.300631 systemd[1]: run-netns-cni\x2d78d16566\x2d7172\x2d4e84\x2ddcc9\x2da169d715b7fe.mount: Deactivated successfully. 
Dec 13 01:33:24.477061 systemd-networkd[1647]: cali8a5dc7c5113: Link UP Dec 13 01:33:24.477469 systemd-networkd[1647]: cali8a5dc7c5113: Gained carrier Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.286 [INFO][5502] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0 calico-apiserver-7977594b99- calico-apiserver 0200e2b4-cc5f-4c09-802c-ef6b6feab695 882 0 2024-12-13 01:32:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7977594b99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-53 calico-apiserver-7977594b99-m4sgj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a5dc7c5113 [] []}} ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.288 [INFO][5502] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.382 [INFO][5522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" HandleID="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.394 [INFO][5522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" HandleID="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-53", "pod":"calico-apiserver-7977594b99-m4sgj", "timestamp":"2024-12-13 01:33:24.382387558 +0000 UTC"}, Hostname:"ip-172-31-29-53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.394 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.394 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.394 [INFO][5522] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-53' Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.399 [INFO][5522] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.425 [INFO][5522] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.439 [INFO][5522] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.443 [INFO][5522] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.447 [INFO][5522] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.447 [INFO][5522] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.448 [INFO][5522] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80 Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.456 [INFO][5522] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.467 [INFO][5522] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.6/26] block=192.168.99.0/26 handle="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.467 [INFO][5522] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.6/26] handle="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" host="ip-172-31-29-53" Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.467 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:33:24.510868 containerd[2086]: 2024-12-13 01:33:24.467 [INFO][5522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.6/26] IPv6=[] ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" HandleID="k8s-pod-network.68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.471 [INFO][5502] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"0200e2b4-cc5f-4c09-802c-ef6b6feab695", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"", Pod:"calico-apiserver-7977594b99-m4sgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a5dc7c5113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.471 [INFO][5502] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.6/32] ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.471 [INFO][5502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a5dc7c5113 ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.478 [INFO][5502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.481 [INFO][5502] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"0200e2b4-cc5f-4c09-802c-ef6b6feab695", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80", Pod:"calico-apiserver-7977594b99-m4sgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a5dc7c5113", MAC:"a2:8e:16:fe:89:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:24.512098 containerd[2086]: 2024-12-13 01:33:24.504 [INFO][5502] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80" Namespace="calico-apiserver" Pod="calico-apiserver-7977594b99-m4sgj" WorkloadEndpoint="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:24.581735 containerd[2086]: time="2024-12-13T01:33:24.581324958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:24.581735 containerd[2086]: time="2024-12-13T01:33:24.581455329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:24.581735 containerd[2086]: time="2024-12-13T01:33:24.581473627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:24.581735 containerd[2086]: time="2024-12-13T01:33:24.581606200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:24.682233 systemd[1]: Started sshd@8-172.31.29.53:22-139.178.68.195:52378.service - OpenSSH per-connection server daemon (139.178.68.195:52378). 
Dec 13 01:33:24.809307 containerd[2086]: time="2024-12-13T01:33:24.809252166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7977594b99-m4sgj,Uid:0200e2b4-cc5f-4c09-802c-ef6b6feab695,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80\"" Dec 13 01:33:24.912307 systemd-networkd[1647]: caliaaaab6756f6: Gained IPv6LL Dec 13 01:33:24.935891 sshd[5578]: Accepted publickey for core from 139.178.68.195 port 52378 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:24.940437 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:24.955653 systemd-logind[2059]: New session 9 of user core. Dec 13 01:33:24.962968 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:33:25.211191 kubelet[3500]: I1213 01:33:25.208853 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-blsbn" podStartSLOduration=42.208721086 podStartE2EDuration="42.208721086s" podCreationTimestamp="2024-12-13 01:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:25.207537879 +0000 UTC m=+53.707131721" watchObservedRunningTime="2024-12-13 01:33:25.208721086 +0000 UTC m=+53.708314907" Dec 13 01:33:25.339445 systemd[1]: run-containerd-runc-k8s.io-5d1a5551f25c7d0dba3047f91bcaab295db341ac67d825ce88bd7761ffcc00e8-runc.TfnK3B.mount: Deactivated successfully. Dec 13 01:33:25.551716 systemd-networkd[1647]: cali8a5dc7c5113: Gained IPv6LL Dec 13 01:33:25.617048 systemd-networkd[1647]: cali19677a8587e: Gained IPv6LL Dec 13 01:33:25.762062 sshd[5578]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:25.778934 systemd[1]: sshd@8-172.31.29.53:22-139.178.68.195:52378.service: Deactivated successfully. Dec 13 01:33:25.800807 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:33:25.813184 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:25.806152 systemd-logind[2059]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:33:25.809871 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:25.809927 systemd-resolved[1968]: Flushed all caches. Dec 13 01:33:25.814735 systemd-logind[2059]: Removed session 9. 
Dec 13 01:33:27.263138 containerd[2086]: time="2024-12-13T01:33:27.263088193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.273880 containerd[2086]: time="2024-12-13T01:33:27.273804018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:33:27.287411 containerd[2086]: time="2024-12-13T01:33:27.287315473Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.299040 containerd[2086]: time="2024-12-13T01:33:27.298625043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:27.299625 containerd[2086]: time="2024-12-13T01:33:27.299585561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.130948284s" Dec 13 01:33:27.299720 containerd[2086]: time="2024-12-13T01:33:27.299631653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:33:27.300666 containerd[2086]: time="2024-12-13T01:33:27.300217392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:33:27.303994 containerd[2086]: time="2024-12-13T01:33:27.303957225Z" level=info msg="CreateContainer within sandbox \"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:33:27.364173 containerd[2086]: time="2024-12-13T01:33:27.364132167Z" level=info msg="CreateContainer within sandbox \"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0071d2212cf79351dc9dfa09e3c6ea6631aff1e0faccba0051fb4bb1f0aa1ca0\"" Dec 13 01:33:27.367042 containerd[2086]: time="2024-12-13T01:33:27.365303782Z" level=info msg="StartContainer for \"0071d2212cf79351dc9dfa09e3c6ea6631aff1e0faccba0051fb4bb1f0aa1ca0\"" Dec 13 01:33:27.466221 containerd[2086]: time="2024-12-13T01:33:27.466101586Z" level=info msg="StartContainer for \"0071d2212cf79351dc9dfa09e3c6ea6631aff1e0faccba0051fb4bb1f0aa1ca0\" returns successfully" Dec 13 01:33:28.549283 ntpd[2037]: Listen normally on 6 vxlan.calico 192.168.99.0:123 Dec 13 01:33:28.549389 ntpd[2037]: Listen normally on 7 vxlan.calico [fe80::64ef:5aff:fee5:34aa%4]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 6 vxlan.calico 192.168.99.0:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 7 vxlan.calico [fe80::64ef:5aff:fee5:34aa%4]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 8 cali7c3980a2145 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 9 cali37868fa7cf9 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:33:28.549969 
ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 10 cali369a1dcf294 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 11 caliaaaab6756f6 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 12 cali19677a8587e [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:33:28.549969 ntpd[2037]: 13 Dec 01:33:28 ntpd[2037]: Listen normally on 13 cali8a5dc7c5113 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:33:28.549447 ntpd[2037]: Listen normally on 8 cali7c3980a2145 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:33:28.549489 ntpd[2037]: Listen normally on 9 cali37868fa7cf9 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:33:28.549689 ntpd[2037]: Listen normally on 10 cali369a1dcf294 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:33:28.549748 ntpd[2037]: Listen normally on 11 caliaaaab6756f6 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:33:28.549787 ntpd[2037]: Listen normally on 12 cali19677a8587e [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:33:28.549825 ntpd[2037]: Listen normally on 13 cali8a5dc7c5113 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:33:28.983324 containerd[2086]: time="2024-12-13T01:33:28.983202108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:28.992198 containerd[2086]: time="2024-12-13T01:33:28.992113190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:33:28.995655 containerd[2086]: time="2024-12-13T01:33:28.995448428Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:29.010389 containerd[2086]: time="2024-12-13T01:33:29.009982848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:29.011317 containerd[2086]: time="2024-12-13T01:33:29.011277009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.71102745s" Dec 13 01:33:29.011453 containerd[2086]: time="2024-12-13T01:33:29.011317551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:33:29.014358 containerd[2086]: time="2024-12-13T01:33:29.014212292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:33:29.025203 containerd[2086]: time="2024-12-13T01:33:29.025070979Z" level=info msg="CreateContainer within sandbox \"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:33:29.091592 containerd[2086]: time="2024-12-13T01:33:29.091528847Z" level=info msg="CreateContainer within sandbox \"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7a40ef6dcd7ab7e25eaf876c5b5aca6a1808ab73558f44d54b572e147ddd5714\"" Dec 13 01:33:29.092371 containerd[2086]: time="2024-12-13T01:33:29.092327394Z" level=info msg="StartContainer for \"7a40ef6dcd7ab7e25eaf876c5b5aca6a1808ab73558f44d54b572e147ddd5714\"" Dec 13 01:33:29.160246 systemd[1]: run-containerd-runc-k8s.io-7a40ef6dcd7ab7e25eaf876c5b5aca6a1808ab73558f44d54b572e147ddd5714-runc.id6dfw.mount: Deactivated successfully. Dec 13 01:33:29.189608 kubelet[3500]: I1213 01:33:29.189561 3500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:29.204751 containerd[2086]: time="2024-12-13T01:33:29.204709289Z" level=info msg="StartContainer for \"7a40ef6dcd7ab7e25eaf876c5b5aca6a1808ab73558f44d54b572e147ddd5714\" returns successfully" Dec 13 01:33:30.066664 kubelet[3500]: I1213 01:33:30.066618 3500 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:33:30.072314 kubelet[3500]: I1213 01:33:30.072274 3500 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:33:30.290297 kubelet[3500]: I1213 01:33:30.284525 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-nk5pm" podStartSLOduration=30.216528341 podStartE2EDuration="38.28444197s" podCreationTimestamp="2024-12-13 01:32:52 +0000 UTC" firstStartedPulling="2024-12-13 01:33:20.94511908 +0000 UTC m=+49.444712894" lastFinishedPulling="2024-12-13 01:33:29.013032711 +0000 UTC m=+57.512626523" observedRunningTime="2024-12-13 01:33:30.281858888 +0000 UTC m=+58.781452703" watchObservedRunningTime="2024-12-13 01:33:30.28444197 +0000 UTC m=+58.784035789" Dec 13 01:33:30.290297 kubelet[3500]: I1213 01:33:30.285954 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7977594b99-6rk5q" podStartSLOduration=34.492342279 podStartE2EDuration="39.285906906s" podCreationTimestamp="2024-12-13 01:32:51 +0000 UTC" firstStartedPulling="2024-12-13 01:33:22.506490259 +0000 UTC m=+51.006084068" lastFinishedPulling="2024-12-13 01:33:27.300054893 +0000 UTC m=+55.799648695" observedRunningTime="2024-12-13 01:33:28.206102174 +0000 UTC m=+56.705695995" watchObservedRunningTime="2024-12-13 01:33:30.285906906 +0000 UTC m=+58.785500726" Dec 13 01:33:30.793166 systemd[1]: Started sshd@9-172.31.29.53:22-139.178.68.195:46728.service - OpenSSH per-connection server daemon (139.178.68.195:46728). Dec 13 01:33:31.017218 sshd[5731]: Accepted publickey for core from 139.178.68.195 port 46728 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:31.020646 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:31.028447 systemd-logind[2059]: New session 10 of user core. Dec 13 01:33:31.035888 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:33:31.703363 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:31.695762 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:31.695809 systemd-resolved[1968]: Flushed all caches. 
Dec 13 01:33:31.916013 sshd[5731]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:31.922749 systemd[1]: sshd@9-172.31.29.53:22-139.178.68.195:46728.service: Deactivated successfully. Dec 13 01:33:31.931399 systemd-logind[2059]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:33:31.933209 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:33:31.938539 containerd[2086]: time="2024-12-13T01:33:31.938500418Z" level=info msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" Dec 13 01:33:31.942014 systemd-logind[2059]: Removed session 10. Dec 13 01:33:31.949054 systemd[1]: Started sshd@10-172.31.29.53:22-139.178.68.195:46732.service - OpenSSH per-connection server daemon (139.178.68.195:46732). Dec 13 01:33:32.112165 sshd[5748]: Accepted publickey for core from 139.178.68.195 port 46732 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:32.118594 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:32.129172 systemd-logind[2059]: New session 11 of user core. Dec 13 01:33:32.134784 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:33:32.156068 containerd[2086]: time="2024-12-13T01:33:32.156022077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.163521 containerd[2086]: time="2024-12-13T01:33:32.163357795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:33:32.168413 containerd[2086]: time="2024-12-13T01:33:32.167239535Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.171818 containerd[2086]: time="2024-12-13T01:33:32.171769612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.172594 containerd[2086]: time="2024-12-13T01:33:32.172560678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.158309285s" Dec 13 01:33:32.172736 containerd[2086]: time="2024-12-13T01:33:32.172714761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:33:32.174928 containerd[2086]: time="2024-12-13T01:33:32.174899819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:33:32.219033 containerd[2086]: time="2024-12-13T01:33:32.218993092Z" level=info msg="CreateContainer within sandbox \"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:33:32.288083 containerd[2086]: time="2024-12-13T01:33:32.287990490Z" level=info msg="CreateContainer within sandbox \"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d0ddc949f1845009b82cc36140b327262b20682fbf15521d1189b72fd19be279\"" Dec 13 01:33:32.290907 containerd[2086]: time="2024-12-13T01:33:32.290871983Z" level=info msg="StartContainer for \"d0ddc949f1845009b82cc36140b327262b20682fbf15521d1189b72fd19be279\"" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.301 [WARNING][5761] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8a855430-bc04-49b7-b6f5-e957608abefc", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d", Pod:"coredns-76f75df574-z9648", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c3980a2145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.312 [INFO][5761] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.312 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" iface="eth0" netns="" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.313 [INFO][5761] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.313 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.501 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.501 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.502 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.516 [WARNING][5779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.516 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.518 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:32.531631 containerd[2086]: 2024-12-13 01:33:32.524 [INFO][5761] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:32.531631 containerd[2086]: time="2024-12-13T01:33:32.528793512Z" level=info msg="TearDown network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" successfully" Dec 13 01:33:32.531631 containerd[2086]: time="2024-12-13T01:33:32.528818627Z" level=info msg="StopPodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" returns successfully" Dec 13 01:33:32.631199 containerd[2086]: time="2024-12-13T01:33:32.628086515Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:32.639970 containerd[2086]: time="2024-12-13T01:33:32.634599702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:33:32.656421 containerd[2086]: time="2024-12-13T01:33:32.652981650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 477.663572ms" Dec 13 01:33:32.656421 containerd[2086]: time="2024-12-13T01:33:32.653030670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:33:32.656421 containerd[2086]: time="2024-12-13T01:33:32.653126738Z" level=info msg="RemovePodSandbox for \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" Dec 13 01:33:32.656421 containerd[2086]: time="2024-12-13T01:33:32.654539075Z" level=info msg="StartContainer for \"d0ddc949f1845009b82cc36140b327262b20682fbf15521d1189b72fd19be279\" returns successfully" Dec 13 01:33:32.662675 containerd[2086]: time="2024-12-13T01:33:32.662382700Z" level=info msg="Forcibly stopping sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\"" Dec 13 01:33:32.666709 containerd[2086]: time="2024-12-13T01:33:32.666599756Z" level=info msg="CreateContainer within sandbox \"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:33:32.813956 containerd[2086]: time="2024-12-13T01:33:32.813735479Z" level=info msg="CreateContainer within sandbox \"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f0e00ee1c6629dff72b1c089669550c06d6fbfb8a103515e6e50f44fd2493c99\"" Dec 13 01:33:32.818364 containerd[2086]: time="2024-12-13T01:33:32.816171194Z" level=info msg="StartContainer for \"f0e00ee1c6629dff72b1c089669550c06d6fbfb8a103515e6e50f44fd2493c99\"" Dec 13 01:33:32.909976 sshd[5748]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:32.926046 systemd[1]: sshd@10-172.31.29.53:22-139.178.68.195:46732.service: Deactivated successfully. Dec 13 01:33:32.966096 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:33:32.972079 systemd-logind[2059]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:33:32.983479 systemd[1]: Started sshd@11-172.31.29.53:22-139.178.68.195:46742.service - OpenSSH per-connection server daemon (139.178.68.195:46742). 
Dec 13 01:33:32.990750 systemd-logind[2059]: Removed session 11. Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.830 [WARNING][5828] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8a855430-bc04-49b7-b6f5-e957608abefc", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"dfaa1526553c7ac0a8be894ba21167884216e54faa55aca254b91305b1533f6d", Pod:"coredns-76f75df574-z9648", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c3980a2145", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.834 [INFO][5828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.834 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" iface="eth0" netns="" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.834 [INFO][5828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.834 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.928 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.930 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.930 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.983 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:32.983 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" HandleID="k8s-pod-network.ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--z9648-eth0" Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:33.004 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:33.029435 containerd[2086]: 2024-12-13 01:33:33.023 [INFO][5828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909" Dec 13 01:33:33.032186 containerd[2086]: time="2024-12-13T01:33:33.029479212Z" level=info msg="TearDown network for sandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" successfully" Dec 13 01:33:33.109121 containerd[2086]: time="2024-12-13T01:33:33.108963971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:33.249361 containerd[2086]: time="2024-12-13T01:33:33.247903189Z" level=info msg="RemovePodSandbox \"ad47095c098ae93d34affadd78444e72b74ececb4a2617e1e2937cfd63115909\" returns successfully" Dec 13 01:33:33.249361 containerd[2086]: time="2024-12-13T01:33:33.249109121Z" level=info msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" Dec 13 01:33:33.249561 containerd[2086]: time="2024-12-13T01:33:33.249376480Z" level=info msg="StartContainer for \"f0e00ee1c6629dff72b1c089669550c06d6fbfb8a103515e6e50f44fd2493c99\" returns successfully" Dec 13 01:33:33.276432 sshd[5866]: Accepted publickey for core from 139.178.68.195 port 46742 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:33.288511 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:33.326643 systemd-logind[2059]: New session 12 of user core. Dec 13 01:33:33.333255 kubelet[3500]: I1213 01:33:33.329371 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ddd8db86d-nkrpz" podStartSLOduration=33.281972753 podStartE2EDuration="41.328282307s" podCreationTimestamp="2024-12-13 01:32:52 +0000 UTC" firstStartedPulling="2024-12-13 01:33:24.127804234 +0000 UTC m=+52.627398035" lastFinishedPulling="2024-12-13 01:33:32.174113789 +0000 UTC m=+60.673707589" observedRunningTime="2024-12-13 01:33:33.316738928 +0000 UTC m=+61.816332749" watchObservedRunningTime="2024-12-13 01:33:33.328282307 +0000 UTC m=+61.827876128" Dec 13 01:33:33.329723 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:33:33.385492 kubelet[3500]: I1213 01:33:33.379632 3500 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7977594b99-m4sgj" podStartSLOduration=34.535642156 podStartE2EDuration="42.379561885s" podCreationTimestamp="2024-12-13 01:32:51 +0000 UTC" firstStartedPulling="2024-12-13 01:33:24.810852896 +0000 UTC m=+53.310446707" lastFinishedPulling="2024-12-13 01:33:32.654772624 +0000 UTC m=+61.154366436" observedRunningTime="2024-12-13 01:33:33.377237686 +0000 UTC m=+61.876831527" watchObservedRunningTime="2024-12-13 01:33:33.379561885 +0000 UTC m=+61.879155706" Dec 13 01:33:33.420323 systemd[1]: run-containerd-runc-k8s.io-d0ddc949f1845009b82cc36140b327262b20682fbf15521d1189b72fd19be279-runc.Za5snm.mount: Deactivated successfully. Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.551 [WARNING][5907] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78", Pod:"calico-apiserver-7977594b99-6rk5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali369a1dcf294", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.555 [INFO][5907] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.555 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" iface="eth0" netns="" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.555 [INFO][5907] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.555 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.635 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.635 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.635 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.677 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.677 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.694 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:33.724502 containerd[2086]: 2024-12-13 01:33:33.716 [INFO][5907] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.724502 containerd[2086]: time="2024-12-13T01:33:33.723022110Z" level=info msg="TearDown network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" successfully" Dec 13 01:33:33.724502 containerd[2086]: time="2024-12-13T01:33:33.723052954Z" level=info msg="StopPodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" returns successfully" Dec 13 01:33:33.724502 containerd[2086]: time="2024-12-13T01:33:33.724417579Z" level=info msg="RemovePodSandbox for \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" Dec 13 01:33:33.728532 containerd[2086]: time="2024-12-13T01:33:33.725063831Z" level=info msg="Forcibly stopping sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\"" Dec 13 01:33:33.743635 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:33.743674 systemd-resolved[1968]: Flushed all caches. Dec 13 01:33:33.748731 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:33.946138 sshd[5866]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:33.957163 systemd[1]: sshd@11-172.31.29.53:22-139.178.68.195:46742.service: Deactivated successfully. 
Dec 13 01:33:33.968180 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:33:33.973524 systemd-logind[2059]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:33:33.976804 systemd-logind[2059]: Removed session 12. Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.883 [WARNING][5956] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b16b6f-09f1-4ddb-a8a5-c0692d5afcac", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4ed85d2be517ab44164abc2283df73121938ba155716bdd4fa4d095c5ddb2b78", Pod:"calico-apiserver-7977594b99-6rk5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali369a1dcf294", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.883 [INFO][5956] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.883 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" iface="eth0" netns="" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.883 [INFO][5956] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.883 [INFO][5956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.954 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.956 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.956 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.974 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.974 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" HandleID="k8s-pod-network.11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--6rk5q-eth0" Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.979 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:33.985834 containerd[2086]: 2024-12-13 01:33:33.982 [INFO][5956] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db" Dec 13 01:33:33.998127 containerd[2086]: time="2024-12-13T01:33:33.985882654Z" level=info msg="TearDown network for sandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" successfully" Dec 13 01:33:34.006776 containerd[2086]: time="2024-12-13T01:33:34.006726088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:34.007108 containerd[2086]: time="2024-12-13T01:33:34.007036491Z" level=info msg="RemovePodSandbox \"11cf48d339a3e24e6793a271dfe6b899e3a7036288cb32edd199f38144cb15db\" returns successfully" Dec 13 01:33:34.007670 containerd[2086]: time="2024-12-13T01:33:34.007640951Z" level=info msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.084 [WARNING][5986] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0e3b0103-676b-4665-aa0f-646d586812a7", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87", Pod:"coredns-76f75df574-blsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaaab6756f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.085 [INFO][5986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.085 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" iface="eth0" netns="" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.085 [INFO][5986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.085 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.138 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.138 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.138 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.146 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.146 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.148 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:34.153416 containerd[2086]: 2024-12-13 01:33:34.151 [INFO][5986] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.154452 containerd[2086]: time="2024-12-13T01:33:34.153464780Z" level=info msg="TearDown network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" successfully" Dec 13 01:33:34.154452 containerd[2086]: time="2024-12-13T01:33:34.153500118Z" level=info msg="StopPodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" returns successfully" Dec 13 01:33:34.154452 containerd[2086]: time="2024-12-13T01:33:34.154041591Z" level=info msg="RemovePodSandbox for \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" Dec 13 01:33:34.154452 containerd[2086]: time="2024-12-13T01:33:34.154085603Z" level=info msg="Forcibly stopping sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\"" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.230 [WARNING][6010] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0e3b0103-676b-4665-aa0f-646d586812a7", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"4e61c61fff7ec93c4ebbe8512414b5221a2651cbcf416dc0112398fc44493d87", Pod:"coredns-76f75df574-blsbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaaaab6756f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.231 [INFO][6010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.231 [INFO][6010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" iface="eth0" netns="" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.231 [INFO][6010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.231 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.295 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.295 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.296 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.308 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.308 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" HandleID="k8s-pod-network.eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Workload="ip--172--31--29--53-k8s-coredns--76f75df574--blsbn-eth0" Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.310 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:34.316505 containerd[2086]: 2024-12-13 01:33:34.313 [INFO][6010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c" Dec 13 01:33:34.317182 containerd[2086]: time="2024-12-13T01:33:34.316561781Z" level=info msg="TearDown network for sandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" successfully" Dec 13 01:33:34.326434 containerd[2086]: time="2024-12-13T01:33:34.326389055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:34.326576 containerd[2086]: time="2024-12-13T01:33:34.326472393Z" level=info msg="RemovePodSandbox \"eef006b16c82eab73c7b18acd0119e674e90c0c1665d6505c46fc27c5e0cf13c\" returns successfully" Dec 13 01:33:34.327130 containerd[2086]: time="2024-12-13T01:33:34.327086080Z" level=info msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.459 [WARNING][6034] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37e377bc-ecb2-46be-838f-bd5c4df2e7cf", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41", Pod:"csi-node-driver-nk5pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37868fa7cf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.460 [INFO][6034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.460 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" iface="eth0" netns="" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.460 [INFO][6034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.461 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.515 [INFO][6041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.515 [INFO][6041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.515 [INFO][6041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.524 [WARNING][6041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.524 [INFO][6041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.527 [INFO][6041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:34.532496 containerd[2086]: 2024-12-13 01:33:34.530 [INFO][6034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.536203 containerd[2086]: time="2024-12-13T01:33:34.532672764Z" level=info msg="TearDown network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" successfully" Dec 13 01:33:34.536203 containerd[2086]: time="2024-12-13T01:33:34.532712242Z" level=info msg="StopPodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" returns successfully" Dec 13 01:33:34.536203 containerd[2086]: time="2024-12-13T01:33:34.534616028Z" level=info msg="RemovePodSandbox for \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" Dec 13 01:33:34.536203 containerd[2086]: time="2024-12-13T01:33:34.534715307Z" level=info msg="Forcibly stopping sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\"" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.613 [WARNING][6059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37e377bc-ecb2-46be-838f-bd5c4df2e7cf", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"92a04c2245e9a0cfba1becaeaff9edd76ad2a40881d601931a3d93fc60b8ca41", Pod:"csi-node-driver-nk5pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37868fa7cf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.614 [INFO][6059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.614 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" iface="eth0" netns="" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.614 [INFO][6059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.614 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.670 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.671 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.671 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.682 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.682 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" HandleID="k8s-pod-network.ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Workload="ip--172--31--29--53-k8s-csi--node--driver--nk5pm-eth0" Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.686 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:34.692778 containerd[2086]: 2024-12-13 01:33:34.689 [INFO][6059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2" Dec 13 01:33:34.697055 containerd[2086]: time="2024-12-13T01:33:34.693517913Z" level=info msg="TearDown network for sandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" successfully" Dec 13 01:33:34.705371 containerd[2086]: time="2024-12-13T01:33:34.705297593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:34.705648 containerd[2086]: time="2024-12-13T01:33:34.705597443Z" level=info msg="RemovePodSandbox \"ac2d59a67e7467ec7c43637658c3dde72f2ef719b8381e20b848bf2be63df5b2\" returns successfully" Dec 13 01:33:34.706714 containerd[2086]: time="2024-12-13T01:33:34.706353719Z" level=info msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.757 [WARNING][6086] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"0200e2b4-cc5f-4c09-802c-ef6b6feab695", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80", Pod:"calico-apiserver-7977594b99-m4sgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a5dc7c5113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.757 [INFO][6086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.757 [INFO][6086] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" iface="eth0" netns="" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.757 [INFO][6086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.757 [INFO][6086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.787 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.787 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.787 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.800 [WARNING][6092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.800 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.804 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:34.815774 containerd[2086]: 2024-12-13 01:33:34.810 [INFO][6086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:34.820445 containerd[2086]: time="2024-12-13T01:33:34.818448594Z" level=info msg="TearDown network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" successfully" Dec 13 01:33:34.820445 containerd[2086]: time="2024-12-13T01:33:34.818490619Z" level=info msg="StopPodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" returns successfully" Dec 13 01:33:34.820445 containerd[2086]: time="2024-12-13T01:33:34.819306598Z" level=info msg="RemovePodSandbox for \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" Dec 13 01:33:34.820445 containerd[2086]: time="2024-12-13T01:33:34.819363605Z" level=info msg="Forcibly stopping sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\"" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:34.934 [WARNING][6111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0", GenerateName:"calico-apiserver-7977594b99-", Namespace:"calico-apiserver", SelfLink:"", UID:"0200e2b4-cc5f-4c09-802c-ef6b6feab695", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7977594b99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"68f256421a1e85a78cb78504ed764cb9ba66d960c32bab7458d39cbefb9f7c80", Pod:"calico-apiserver-7977594b99-m4sgj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a5dc7c5113", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:34.935 [INFO][6111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:34.935 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" iface="eth0" netns="" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:34.935 [INFO][6111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:34.935 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.000 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.001 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.001 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.009 [WARNING][6118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.009 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" HandleID="k8s-pod-network.b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Workload="ip--172--31--29--53-k8s-calico--apiserver--7977594b99--m4sgj-eth0" Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.027 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:35.050513 containerd[2086]: 2024-12-13 01:33:35.042 [INFO][6111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6" Dec 13 01:33:35.052476 containerd[2086]: time="2024-12-13T01:33:35.050572493Z" level=info msg="TearDown network for sandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" successfully" Dec 13 01:33:35.063486 containerd[2086]: time="2024-12-13T01:33:35.063440359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:35.063635 containerd[2086]: time="2024-12-13T01:33:35.063522113Z" level=info msg="RemovePodSandbox \"b8e708e0d12c5c6d9a74f9327cdb4b8b9b99f30c3293f188c9c41b650ebba2b6\" returns successfully" Dec 13 01:33:35.064955 containerd[2086]: time="2024-12-13T01:33:35.064928554Z" level=info msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.146 [WARNING][6138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0", GenerateName:"calico-kube-controllers-ddd8db86d-", Namespace:"calico-system", SelfLink:"", UID:"32353b72-7c12-4d50-b914-daf11815550a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd8db86d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d", Pod:"calico-kube-controllers-ddd8db86d-nkrpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19677a8587e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.146 [INFO][6138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.146 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" iface="eth0" netns="" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.146 [INFO][6138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.146 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.197 [INFO][6144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.200 [INFO][6144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.200 [INFO][6144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.207 [WARNING][6144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.207 [INFO][6144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.209 [INFO][6144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:35.215767 containerd[2086]: 2024-12-13 01:33:35.213 [INFO][6138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.217424 containerd[2086]: time="2024-12-13T01:33:35.216055788Z" level=info msg="TearDown network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" successfully" Dec 13 01:33:35.217424 containerd[2086]: time="2024-12-13T01:33:35.216108976Z" level=info msg="StopPodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" returns successfully" Dec 13 01:33:35.217551 containerd[2086]: time="2024-12-13T01:33:35.217521313Z" level=info msg="RemovePodSandbox for \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" Dec 13 01:33:35.217600 containerd[2086]: time="2024-12-13T01:33:35.217557899Z" level=info msg="Forcibly stopping sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\"" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.273 [WARNING][6162] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0", GenerateName:"calico-kube-controllers-ddd8db86d-", Namespace:"calico-system", SelfLink:"", UID:"32353b72-7c12-4d50-b914-daf11815550a", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ddd8db86d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-53", ContainerID:"fc58a06be2c2fb097d0dae9145f168d1307fd35f4f02f580e41013dca27c5f8d", Pod:"calico-kube-controllers-ddd8db86d-nkrpz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19677a8587e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.274 [INFO][6162] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.274 [INFO][6162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" iface="eth0" netns="" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.274 [INFO][6162] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.274 [INFO][6162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.306 [INFO][6168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.306 [INFO][6168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.306 [INFO][6168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.315 [WARNING][6168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.315 [INFO][6168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" HandleID="k8s-pod-network.ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Workload="ip--172--31--29--53-k8s-calico--kube--controllers--ddd8db86d--nkrpz-eth0" Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.317 [INFO][6168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:33:35.321306 containerd[2086]: 2024-12-13 01:33:35.319 [INFO][6162] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82" Dec 13 01:33:35.321306 containerd[2086]: time="2024-12-13T01:33:35.321210775Z" level=info msg="TearDown network for sandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" successfully" Dec 13 01:33:35.329465 containerd[2086]: time="2024-12-13T01:33:35.329201678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:33:35.329465 containerd[2086]: time="2024-12-13T01:33:35.329329383Z" level=info msg="RemovePodSandbox \"ad3a991bea30bc6d9465d4002e8615c37f32464d59de6aceb8ffdaf7412d1e82\" returns successfully" Dec 13 01:33:38.975711 systemd[1]: Started sshd@12-172.31.29.53:22-139.178.68.195:40298.service - OpenSSH per-connection server daemon (139.178.68.195:40298). Dec 13 01:33:39.162953 sshd[6194]: Accepted publickey for core from 139.178.68.195 port 40298 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:39.166740 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:39.175795 systemd-logind[2059]: New session 13 of user core. Dec 13 01:33:39.179829 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:33:39.521838 sshd[6194]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:39.531028 systemd[1]: sshd@12-172.31.29.53:22-139.178.68.195:40298.service: Deactivated successfully. Dec 13 01:33:39.538688 systemd-logind[2059]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:33:39.539983 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:33:39.542094 systemd-logind[2059]: Removed session 13. Dec 13 01:33:43.398635 kubelet[3500]: I1213 01:33:43.397672 3500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:44.551733 systemd[1]: Started sshd@13-172.31.29.53:22-139.178.68.195:40306.service - OpenSSH per-connection server daemon (139.178.68.195:40306). Dec 13 01:33:44.732044 sshd[6220]: Accepted publickey for core from 139.178.68.195 port 40306 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:44.736288 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:44.741832 systemd-logind[2059]: New session 14 of user core. Dec 13 01:33:44.748761 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 01:33:45.182202 sshd[6220]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:45.195176 systemd-logind[2059]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:33:45.195527 systemd[1]: sshd@13-172.31.29.53:22-139.178.68.195:40306.service: Deactivated successfully. Dec 13 01:33:45.205566 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:33:45.207090 systemd-logind[2059]: Removed session 14. Dec 13 01:33:50.211129 systemd[1]: Started sshd@14-172.31.29.53:22-139.178.68.195:54274.service - OpenSSH per-connection server daemon (139.178.68.195:54274). Dec 13 01:33:50.407565 sshd[6244]: Accepted publickey for core from 139.178.68.195 port 54274 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:50.410737 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:50.416993 systemd-logind[2059]: New session 15 of user core. Dec 13 01:33:50.422139 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:33:51.010453 sshd[6244]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:51.016721 systemd[1]: sshd@14-172.31.29.53:22-139.178.68.195:54274.service: Deactivated successfully. Dec 13 01:33:51.016773 systemd-logind[2059]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:33:51.023834 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:33:51.025782 systemd-logind[2059]: Removed session 15. Dec 13 01:33:51.727611 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:51.727647 systemd-resolved[1968]: Flushed all caches. Dec 13 01:33:51.729400 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:56.041727 systemd[1]: Started sshd@15-172.31.29.53:22-139.178.68.195:47750.service - OpenSSH per-connection server daemon (139.178.68.195:47750). Dec 13 01:33:56.219604 sshd[6281]: Accepted publickey for core from 139.178.68.195 port 47750 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:56.221862 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:56.228576 systemd-logind[2059]: New session 16 of user core. Dec 13 01:33:56.234110 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:33:56.759823 sshd[6281]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:56.765608 systemd[1]: sshd@15-172.31.29.53:22-139.178.68.195:47750.service: Deactivated successfully. Dec 13 01:33:56.771977 systemd-logind[2059]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:33:56.772942 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:33:56.775249 systemd-logind[2059]: Removed session 16. Dec 13 01:33:56.790825 systemd[1]: Started sshd@16-172.31.29.53:22-139.178.68.195:47752.service - OpenSSH per-connection server daemon (139.178.68.195:47752). Dec 13 01:33:56.970735 sshd[6295]: Accepted publickey for core from 139.178.68.195 port 47752 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:56.973666 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:56.980983 systemd-logind[2059]: New session 17 of user core. Dec 13 01:33:56.986727 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:33:57.679672 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:33:57.679704 systemd-resolved[1968]: Flushed all caches. 
Dec 13 01:33:57.681357 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:33:57.721730 sshd[6295]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:57.745295 systemd[1]: sshd@16-172.31.29.53:22-139.178.68.195:47752.service: Deactivated successfully. Dec 13 01:33:57.755797 systemd-logind[2059]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:33:57.759771 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:33:57.768671 systemd[1]: Started sshd@17-172.31.29.53:22-139.178.68.195:47760.service - OpenSSH per-connection server daemon (139.178.68.195:47760). Dec 13 01:33:57.770225 systemd-logind[2059]: Removed session 17. Dec 13 01:33:57.942044 sshd[6309]: Accepted publickey for core from 139.178.68.195 port 47760 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:33:57.942361 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:33:57.948364 systemd-logind[2059]: New session 18 of user core. Dec 13 01:33:57.954707 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:34:00.887575 sshd[6309]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:00.896987 systemd[1]: sshd@17-172.31.29.53:22-139.178.68.195:47760.service: Deactivated successfully. Dec 13 01:34:00.932634 systemd-logind[2059]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:34:00.968661 systemd[1]: Started sshd@18-172.31.29.53:22-139.178.68.195:47770.service - OpenSSH per-connection server daemon (139.178.68.195:47770). Dec 13 01:34:00.969091 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:34:00.976776 systemd-logind[2059]: Removed session 18. Dec 13 01:34:01.187393 sshd[6334]: Accepted publickey for core from 139.178.68.195 port 47770 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:01.252233 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:01.259683 systemd-logind[2059]: New session 19 of user core. Dec 13 01:34:01.274060 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:34:01.713453 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:34:01.712542 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:34:01.712550 systemd-resolved[1968]: Flushed all caches. Dec 13 01:34:02.553873 sshd[6334]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:02.565950 systemd-logind[2059]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:34:02.565998 systemd[1]: sshd@18-172.31.29.53:22-139.178.68.195:47770.service: Deactivated successfully. Dec 13 01:34:02.585904 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:34:02.595756 systemd[1]: Started sshd@19-172.31.29.53:22-139.178.68.195:47774.service - OpenSSH per-connection server daemon (139.178.68.195:47774). Dec 13 01:34:02.598611 systemd-logind[2059]: Removed session 19. Dec 13 01:34:02.789857 sshd[6347]: Accepted publickey for core from 139.178.68.195 port 47774 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:02.791653 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:02.822108 systemd-logind[2059]: New session 20 of user core. Dec 13 01:34:02.826736 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 01:34:03.061158 sshd[6347]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:03.067184 systemd[1]: sshd@19-172.31.29.53:22-139.178.68.195:47774.service: Deactivated successfully. Dec 13 01:34:03.070856 systemd-logind[2059]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:34:03.071561 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:34:03.077115 systemd-logind[2059]: Removed session 20. Dec 13 01:34:03.762561 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:34:03.762316 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:34:03.762325 systemd-resolved[1968]: Flushed all caches. Dec 13 01:34:08.095776 systemd[1]: Started sshd@20-172.31.29.53:22-139.178.68.195:36360.service - OpenSSH per-connection server daemon (139.178.68.195:36360). Dec 13 01:34:08.296000 sshd[6381]: Accepted publickey for core from 139.178.68.195 port 36360 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:08.303092 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:08.316876 systemd-logind[2059]: New session 21 of user core. Dec 13 01:34:08.323908 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:34:08.711624 sshd[6381]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:08.720892 systemd[1]: sshd@20-172.31.29.53:22-139.178.68.195:36360.service: Deactivated successfully. Dec 13 01:34:08.731193 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:34:08.733987 systemd-logind[2059]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:34:08.741406 systemd-logind[2059]: Removed session 21. Dec 13 01:34:13.748262 systemd[1]: Started sshd@21-172.31.29.53:22-139.178.68.195:36366.service - OpenSSH per-connection server daemon (139.178.68.195:36366). Dec 13 01:34:13.937393 sshd[6398]: Accepted publickey for core from 139.178.68.195 port 36366 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:13.941088 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:13.951832 systemd-logind[2059]: New session 22 of user core. Dec 13 01:34:13.960282 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:34:14.289272 sshd[6398]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:14.302840 systemd[1]: sshd@21-172.31.29.53:22-139.178.68.195:36366.service: Deactivated successfully. Dec 13 01:34:14.303410 systemd-logind[2059]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:34:14.310915 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:34:14.312988 systemd-logind[2059]: Removed session 22. Dec 13 01:34:19.318670 systemd[1]: Started sshd@22-172.31.29.53:22-139.178.68.195:57456.service - OpenSSH per-connection server daemon (139.178.68.195:57456). Dec 13 01:34:19.485846 sshd[6414]: Accepted publickey for core from 139.178.68.195 port 57456 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:19.487693 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:19.492731 systemd-logind[2059]: New session 23 of user core. Dec 13 01:34:19.503109 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:34:19.867680 sshd[6414]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:19.876915 systemd[1]: sshd@22-172.31.29.53:22-139.178.68.195:57456.service: Deactivated successfully. 
Dec 13 01:34:19.882774 systemd-logind[2059]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:34:19.883045 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:34:19.887958 systemd-logind[2059]: Removed session 23. Dec 13 01:34:24.896807 systemd[1]: Started sshd@23-172.31.29.53:22-139.178.68.195:57472.service - OpenSSH per-connection server daemon (139.178.68.195:57472). Dec 13 01:34:25.082419 sshd[6449]: Accepted publickey for core from 139.178.68.195 port 57472 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:25.087583 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:25.120953 systemd-logind[2059]: New session 24 of user core. Dec 13 01:34:25.130802 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:34:25.514456 sshd[6449]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:25.519443 systemd[1]: sshd@23-172.31.29.53:22-139.178.68.195:57472.service: Deactivated successfully. Dec 13 01:34:25.523090 systemd-logind[2059]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:34:25.523755 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:34:25.525937 systemd-logind[2059]: Removed session 24. Dec 13 01:34:30.543178 systemd[1]: Started sshd@24-172.31.29.53:22-139.178.68.195:43016.service - OpenSSH per-connection server daemon (139.178.68.195:43016). Dec 13 01:34:30.721960 sshd[6484]: Accepted publickey for core from 139.178.68.195 port 43016 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:30.727607 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:30.735533 systemd-logind[2059]: New session 25 of user core. Dec 13 01:34:30.741711 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:34:31.005753 sshd[6484]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:31.011380 systemd[1]: sshd@24-172.31.29.53:22-139.178.68.195:43016.service: Deactivated successfully. Dec 13 01:34:31.017501 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:34:31.019141 systemd-logind[2059]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:34:31.021441 systemd-logind[2059]: Removed session 25. Dec 13 01:34:36.039317 systemd[1]: Started sshd@25-172.31.29.53:22-139.178.68.195:58278.service - OpenSSH per-connection server daemon (139.178.68.195:58278). Dec 13 01:34:36.211811 sshd[6500]: Accepted publickey for core from 139.178.68.195 port 58278 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:36.213431 sshd[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:36.219398 systemd-logind[2059]: New session 26 of user core. Dec 13 01:34:36.226847 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:34:36.461001 sshd[6500]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:36.466381 systemd[1]: sshd@25-172.31.29.53:22-139.178.68.195:58278.service: Deactivated successfully. Dec 13 01:34:36.473055 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:34:36.474584 systemd-logind[2059]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:34:36.478104 systemd-logind[2059]: Removed session 26. Dec 13 01:34:51.657341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef-rootfs.mount: Deactivated successfully. 
Dec 13 01:34:51.720719 containerd[2086]: time="2024-12-13T01:34:51.681964401Z" level=info msg="shim disconnected" id=e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef namespace=k8s.io Dec 13 01:34:51.733870 containerd[2086]: time="2024-12-13T01:34:51.733819030Z" level=warning msg="cleaning up after shim disconnected" id=e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef namespace=k8s.io Dec 13 01:34:51.733870 containerd[2086]: time="2024-12-13T01:34:51.733858073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:34:51.970275 kubelet[3500]: I1213 01:34:51.970056 3500 scope.go:117] "RemoveContainer" containerID="e76d452d3d93a2524597f01c10402e9fa6093a06a61266184f4895bbecf6beef" Dec 13 01:34:51.984424 containerd[2086]: time="2024-12-13T01:34:51.984370605Z" level=info msg="CreateContainer within sandbox \"d927f4b1e246e9a38f0b18fe7ad0098462fe3f4a730bd3f9f99354463cbb7b4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 01:34:52.061302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775776309.mount: Deactivated successfully. Dec 13 01:34:52.078181 containerd[2086]: time="2024-12-13T01:34:52.078137501Z" level=info msg="CreateContainer within sandbox \"d927f4b1e246e9a38f0b18fe7ad0098462fe3f4a730bd3f9f99354463cbb7b4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d3f3fa6e83ced335658c6cb20eead39869deb6bd82f4ffad51157635850c5be9\"" Dec 13 01:34:52.078863 containerd[2086]: time="2024-12-13T01:34:52.078835985Z" level=info msg="StartContainer for \"d3f3fa6e83ced335658c6cb20eead39869deb6bd82f4ffad51157635850c5be9\"" Dec 13 01:34:52.180654 containerd[2086]: time="2024-12-13T01:34:52.180527096Z" level=info msg="StartContainer for \"d3f3fa6e83ced335658c6cb20eead39869deb6bd82f4ffad51157635850c5be9\" returns successfully" Dec 13 01:34:52.252819 containerd[2086]: time="2024-12-13T01:34:52.252236466Z" level=info msg="shim disconnected" id=998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab namespace=k8s.io Dec 13 01:34:52.253299 containerd[2086]: time="2024-12-13T01:34:52.253079584Z" level=warning msg="cleaning up after shim disconnected" id=998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab namespace=k8s.io Dec 13 01:34:52.253299 containerd[2086]: time="2024-12-13T01:34:52.253104124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:34:52.655759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab-rootfs.mount: Deactivated successfully. Dec 13 01:34:52.959183 kubelet[3500]: I1213 01:34:52.959038 3500 scope.go:117] "RemoveContainer" containerID="998db809948cf70fa3e333cd20f68be388d3b280fcbabb49b0aa2c8925c57bab" Dec 13 01:34:52.967799 containerd[2086]: time="2024-12-13T01:34:52.967701928Z" level=info msg="CreateContainer within sandbox \"39532486831f5d23cf98d4e91b6f5b131122c41932a47cf1e4acc8a0d875c0e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 01:34:53.082230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580189301.mount: Deactivated successfully. 
Dec 13 01:34:53.100552 containerd[2086]: time="2024-12-13T01:34:53.100501826Z" level=info msg="CreateContainer within sandbox \"39532486831f5d23cf98d4e91b6f5b131122c41932a47cf1e4acc8a0d875c0e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fce104930f1c2a8add70566fb8dcb3f0ee5792fb4c23b64581e0c3804d307c66\"" Dec 13 01:34:53.101394 containerd[2086]: time="2024-12-13T01:34:53.101359887Z" level=info msg="StartContainer for \"fce104930f1c2a8add70566fb8dcb3f0ee5792fb4c23b64581e0c3804d307c66\"" Dec 13 01:34:53.211859 containerd[2086]: time="2024-12-13T01:34:53.211584895Z" level=info msg="StartContainer for \"fce104930f1c2a8add70566fb8dcb3f0ee5792fb4c23b64581e0c3804d307c66\" returns successfully" Dec 13 01:34:54.478102 kubelet[3500]: E1213 01:34:54.477981 3500 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Dec 13 01:34:54.486792 kubelet[3500]: E1213 01:34:54.485869 3500 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Dec 13 01:34:55.730411 systemd-journald[1563]: Under memory pressure, flushing caches. Dec 13 01:34:55.727884 systemd-resolved[1968]: Under memory pressure, flushing caches. Dec 13 01:34:55.727920 systemd-resolved[1968]: Flushed all caches. Dec 13 01:34:56.544795 containerd[2086]: time="2024-12-13T01:34:56.544722085Z" level=info msg="shim disconnected" id=336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827 namespace=k8s.io Dec 13 01:34:56.544795 containerd[2086]: time="2024-12-13T01:34:56.544789908Z" level=warning msg="cleaning up after shim disconnected" id=336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827 namespace=k8s.io Dec 13 01:34:56.544795 containerd[2086]: time="2024-12-13T01:34:56.544800453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:34:56.546830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827-rootfs.mount: Deactivated successfully. 
Dec 13 01:34:57.008556 kubelet[3500]: I1213 01:34:57.008525 3500 scope.go:117] "RemoveContainer" containerID="336c7eb008c3cb516b94f1b48bcb258d647da9a0bf1df6e575a874dd543a4827" Dec 13 01:34:57.011410 containerd[2086]: time="2024-12-13T01:34:57.011373459Z" level=info msg="CreateContainer within sandbox \"bd645dbbeab9c04065596e4a1b15691f3f2ca2ea4d9823aa3dec0e946dbd7c61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 01:34:57.082512 containerd[2086]: time="2024-12-13T01:34:57.082458022Z" level=info msg="CreateContainer within sandbox \"bd645dbbeab9c04065596e4a1b15691f3f2ca2ea4d9823aa3dec0e946dbd7c61\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"086a3ad6024f54345e6af811405b1e91d2e499a7d701bb824d6205e9df6aab4e\"" Dec 13 01:34:57.083154 containerd[2086]: time="2024-12-13T01:34:57.083124034Z" level=info msg="StartContainer for \"086a3ad6024f54345e6af811405b1e91d2e499a7d701bb824d6205e9df6aab4e\"" Dec 13 01:34:57.186701 containerd[2086]: time="2024-12-13T01:34:57.186644374Z" level=info msg="StartContainer for \"086a3ad6024f54345e6af811405b1e91d2e499a7d701bb824d6205e9df6aab4e\" returns successfully" Dec 13 01:34:57.547753 systemd[1]: run-containerd-runc-k8s.io-086a3ad6024f54345e6af811405b1e91d2e499a7d701bb824d6205e9df6aab4e-runc.rqmKLQ.mount: Deactivated successfully. Dec 13 01:35:04.490977 kubelet[3500]: E1213 01:35:04.490932 3500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-53?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"