Dec 13 01:28:56.106323 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:28:56.106366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:56.110482 kernel: BIOS-provided physical RAM map: Dec 13 01:28:56.110507 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:28:56.110519 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:28:56.110531 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:28:56.110552 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 01:28:56.110565 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 01:28:56.110577 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 01:28:56.110590 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:28:56.110603 kernel: NX (Execute Disable) protection: active Dec 13 01:28:56.110616 kernel: APIC: Static calls initialized Dec 13 01:28:56.110629 kernel: SMBIOS 2.7 present. Dec 13 01:28:56.110642 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 01:28:56.110662 kernel: Hypervisor detected: KVM Dec 13 01:28:56.110677 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:28:56.110692 kernel: kvm-clock: using sched offset of 6247131054 cycles Dec 13 01:28:56.110707 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:28:56.110722 kernel: tsc: Detected 2499.994 MHz processor Dec 13 01:28:56.110737 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:28:56.110753 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:28:56.110771 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 01:28:56.110786 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:28:56.110801 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:28:56.110816 kernel: Using GB pages for direct mapping Dec 13 01:28:56.110830 kernel: ACPI: Early table checksum verification disabled Dec 13 01:28:56.110844 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 01:28:56.110859 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 01:28:56.110874 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:28:56.110889 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 01:28:56.110907 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 01:28:56.110921 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:28:56.110935 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:28:56.110950 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 01:28:56.110965 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:28:56.110979 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 01:28:56.110994 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 01:28:56.111008 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:28:56.111023 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 01:28:56.111041 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 01:28:56.111062 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 01:28:56.111077 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 01:28:56.111092 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 01:28:56.111107 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 01:28:56.111126 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 01:28:56.111141 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 01:28:56.111157 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 01:28:56.111173 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 01:28:56.111188 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:28:56.111203 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:28:56.111219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 01:28:56.111234 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 01:28:56.111573 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 01:28:56.111607 kernel: Zone ranges: Dec 13 01:28:56.111623 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:28:56.111639 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 01:28:56.111654 kernel: Normal empty Dec 13 01:28:56.111670 kernel: Movable zone start for each node Dec 13 01:28:56.111685 kernel: Early memory node ranges Dec 13 01:28:56.111700 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:28:56.111716 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 01:28:56.111732 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 01:28:56.111748 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:28:56.111767 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:28:56.111782 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 01:28:56.111797 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:28:56.111813 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:28:56.111829 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 01:28:56.111845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:28:56.111861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:28:56.111876 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:28:56.111892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:28:56.111911 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:28:56.111927 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:28:56.111943 kernel: TSC deadline timer available Dec 13 01:28:56.111958 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:28:56.111974 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:28:56.111990 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 01:28:56.112006 kernel: Booting paravirtualized kernel on KVM Dec 13 01:28:56.112022 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:28:56.112038 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:28:56.112057 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:28:56.112072 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:28:56.112087 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:28:56.112103 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:28:56.112118 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:28:56.112136 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:56.112161 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:28:56.112176 kernel: random: crng init done Dec 13 01:28:56.112195 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:28:56.112211 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:28:56.112227 kernel: Fallback order for Node 0: 0 Dec 13 01:28:56.112243 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 01:28:56.112258 kernel: Policy zone: DMA32 Dec 13 01:28:56.112274 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:28:56.112290 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Dec 13 01:28:56.112306 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:28:56.112322 kernel: Kernel/User page tables isolation: enabled Dec 13 01:28:56.112340 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:28:56.112355 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:28:56.112371 kernel: Dynamic Preempt: voluntary Dec 13 01:28:56.115612 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:28:56.115638 kernel: rcu: RCU event tracing is enabled. Dec 13 01:28:56.115654 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:28:56.115669 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:28:56.115684 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:28:56.115699 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:28:56.115720 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:28:56.115734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:28:56.115750 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:28:56.115826 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:28:56.115843 kernel: Console: colour VGA+ 80x25 Dec 13 01:28:56.115858 kernel: printk: console [ttyS0] enabled Dec 13 01:28:56.115873 kernel: ACPI: Core revision 20230628 Dec 13 01:28:56.115888 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 01:28:56.115903 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:28:56.115922 kernel: x2apic enabled Dec 13 01:28:56.115937 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:28:56.115964 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 13 01:28:56.115983 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Dec 13 01:28:56.115999 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:28:56.116014 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:28:56.116030 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:28:56.116045 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:28:56.116060 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:28:56.116075 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:28:56.116091 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 01:28:56.116106 kernel: RETBleed: Vulnerable Dec 13 01:28:56.116125 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:28:56.116141 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:28:56.116164 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:28:56.116180 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:28:56.116195 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:28:56.116210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:28:56.116226 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:28:56.116244 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 01:28:56.116260 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 01:28:56.116275 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:28:56.116291 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:28:56.116306 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:28:56.116322 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 01:28:56.116337 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:28:56.116353 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 01:28:56.116368 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 01:28:56.116406 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 01:28:56.116422 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 01:28:56.116441 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 01:28:56.116457 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 01:28:56.116472 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 01:28:56.116488 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:28:56.116503 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:28:56.116518 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:28:56.116534 kernel: landlock: Up and running. Dec 13 01:28:56.116549 kernel: SELinux: Initializing. Dec 13 01:28:56.116565 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:28:56.116580 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:28:56.116596 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:28:56.116723 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:56.116742 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:56.116759 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:28:56.116776 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:28:56.116792 kernel: signal: max sigframe size: 3632 Dec 13 01:28:56.116808 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:28:56.116824 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:28:56.116840 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:28:56.116856 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:28:56.116876 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:28:56.116892 kernel: .... node #0, CPUs: #1 Dec 13 01:28:56.116909 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:28:56.116926 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:28:56.116942 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:28:56.116958 kernel: smpboot: Max logical packages: 1 Dec 13 01:28:56.116973 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Dec 13 01:28:56.116989 kernel: devtmpfs: initialized Dec 13 01:28:56.117008 kernel: x86/mm: Memory block size: 128MB Dec 13 01:28:56.117024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:28:56.117040 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:28:56.117056 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:28:56.117072 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:28:56.117088 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:28:56.117188 kernel: audit: type=2000 audit(1734053335.453:1): state=initialized audit_enabled=0 res=1 Dec 13 01:28:56.117205 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:28:56.117221 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:28:56.117240 kernel: cpuidle: using governor menu Dec 13 01:28:56.117256 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:28:56.117272 kernel: dca service started, version 1.12.1 Dec 13 01:28:56.117288 kernel: PCI: Using configuration type 1 for base access Dec 13 01:28:56.117304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:28:56.117320 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:28:56.117336 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:28:56.117352 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:28:56.117368 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:28:56.118427 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:28:56.118450 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:28:56.118466 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:28:56.118482 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:28:56.118498 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:28:56.118514 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:28:56.118664 kernel: ACPI: Interpreter enabled Dec 13 01:28:56.118682 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:28:56.118697 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:28:56.118748 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:28:56.118772 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:28:56.118788 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:28:56.118804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:28:56.119155 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:28:56.119427 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:28:56.119566 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:28:56.119586 kernel: acpiphp: Slot [3] registered Dec 13 01:28:56.119607 kernel: acpiphp: Slot [4] registered Dec 13 01:28:56.119623 kernel: acpiphp: Slot [5] registered Dec 13 01:28:56.119638 kernel: acpiphp: Slot [6] registered Dec 13 01:28:56.119654 kernel: acpiphp: Slot [7] registered Dec 13 01:28:56.119670 kernel: acpiphp: Slot [8] registered Dec 13 01:28:56.119685 kernel: acpiphp: Slot [9] registered Dec 13 01:28:56.119700 kernel: acpiphp: Slot [10] registered Dec 13 01:28:56.119716 kernel: acpiphp: Slot [11] registered Dec 13 01:28:56.119731 kernel: acpiphp: Slot [12] registered Dec 13 01:28:56.119750 kernel: acpiphp: Slot [13] registered Dec 13 01:28:56.119813 kernel: acpiphp: Slot [14] registered Dec 13 01:28:56.119832 kernel: acpiphp: Slot [15] registered Dec 13 01:28:56.119848 kernel: acpiphp: Slot [16] registered Dec 13 01:28:56.119864 kernel: acpiphp: Slot [17] registered Dec 13 01:28:56.119880 kernel: acpiphp: Slot [18] registered Dec 13 01:28:56.119895 kernel: acpiphp: Slot [19] registered Dec 13 01:28:56.119911 kernel: acpiphp: Slot [20] registered Dec 13 01:28:56.119926 kernel: acpiphp: Slot [21] registered Dec 13 01:28:56.119941 kernel: acpiphp: Slot [22] registered Dec 13 01:28:56.119961 kernel: acpiphp: Slot [23] registered Dec 13 01:28:56.119977 kernel: acpiphp: Slot [24] registered Dec 13 01:28:56.119992 kernel: acpiphp: Slot [25] registered Dec 13 01:28:56.120008 kernel: acpiphp: Slot [26] registered Dec 13 01:28:56.120024 kernel: acpiphp: Slot [27] registered Dec 13 01:28:56.120040 kernel: acpiphp: Slot [28] registered Dec 13 01:28:56.120055 kernel: acpiphp: Slot [29] registered Dec 13 01:28:56.120071 kernel: acpiphp: Slot [30] registered Dec 13 01:28:56.120086 kernel: acpiphp: Slot [31] registered Dec 13 01:28:56.120105 kernel: PCI host bridge to bus 0000:00 
Dec 13 01:28:56.120261 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:28:56.125982 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:28:56.126186 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:28:56.126376 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 01:28:56.126606 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:28:56.126760 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:28:56.126908 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 01:28:56.127043 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 01:28:56.129538 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:28:56.129701 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 01:28:56.129898 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 01:28:56.130053 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 01:28:56.130188 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 01:28:56.130328 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 01:28:56.133571 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 01:28:56.133933 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 01:28:56.134103 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 01:28:56.134241 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 01:28:56.134372 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 01:28:56.134523 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:28:56.134669 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:28:56.134802 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 01:28:56.134938 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:28:56.135068 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 01:28:56.135089 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:28:56.135106 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:28:56.135126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:28:56.135143 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:28:56.135222 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:28:56.135243 kernel: iommu: Default domain type: Translated Dec 13 01:28:56.139481 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:28:56.139536 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:28:56.139570 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:28:56.139602 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:28:56.139622 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 01:28:56.139806 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 01:28:56.140357 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 01:28:56.140638 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:28:56.140666 kernel: vgaarb: loaded Dec 13 01:28:56.140687 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 01:28:56.140707 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 01:28:56.140727 kernel: clocksource: Switched 
to clocksource kvm-clock Dec 13 01:28:56.140747 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:28:56.140766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:28:56.140794 kernel: pnp: PnP ACPI init Dec 13 01:28:56.140813 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:28:56.140832 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:28:56.140853 kernel: NET: Registered PF_INET protocol family Dec 13 01:28:56.140872 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:28:56.140891 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:28:56.140910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:28:56.140930 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:28:56.140950 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:28:56.140972 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:28:56.140991 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:28:56.141011 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:28:56.141030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:28:56.141050 kernel: NET: Registered PF_XDP protocol family Dec 13 01:28:56.141210 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:28:56.141341 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:28:56.141474 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:28:56.141650 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 01:28:56.141852 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:28:56.141873 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:28:56.141892 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:28:56.141910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Dec 13 01:28:56.141926 kernel: clocksource: Switched to clocksource tsc Dec 13 01:28:56.141943 kernel: Initialise system trusted keyrings Dec 13 01:28:56.141961 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:28:56.141986 kernel: Key type asymmetric registered Dec 13 01:28:56.142002 kernel: Asymmetric key parser 'x509' registered Dec 13 01:28:56.142018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:28:56.142036 kernel: io scheduler mq-deadline registered Dec 13 01:28:56.142053 kernel: io scheduler kyber registered Dec 13 01:28:56.142070 kernel: io scheduler bfq registered Dec 13 01:28:56.142087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:28:56.142104 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:28:56.142122 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:28:56.142145 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:28:56.142161 kernel: i8042: Warning: Keylock active Dec 13 01:28:56.142179 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:28:56.142197 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:28:56.142417 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:28:56.142636 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 
01:28:56.142770 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:28:55 UTC (1734053335) Dec 13 01:28:56.142892 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:28:56.142913 kernel: intel_pstate: CPU model not supported Dec 13 01:28:56.142928 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:28:56.142942 kernel: Segment Routing with IPv6 Dec 13 01:28:56.142957 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:28:56.142971 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:28:56.142984 kernel: Key type dns_resolver registered Dec 13 01:28:56.142998 kernel: IPI shorthand broadcast: enabled Dec 13 01:28:56.143012 kernel: sched_clock: Marking stable (643002527, 262430415)->(996302721, -90869779) Dec 13 01:28:56.143026 kernel: registered taskstats version 1 Dec 13 01:28:56.143044 kernel: Loading compiled-in X.509 certificates Dec 13 01:28:56.143058 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:28:56.143071 kernel: Key type .fscrypt registered Dec 13 01:28:56.143084 kernel: Key type fscrypt-provisioning registered Dec 13 01:28:56.143097 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:28:56.143111 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:28:56.143125 kernel: ima: No architecture policies found Dec 13 01:28:56.143138 kernel: clk: Disabling unused clocks Dec 13 01:28:56.143152 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:28:56.143241 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:28:56.143257 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:28:56.143271 kernel: Run /init as init process Dec 13 01:28:56.143285 kernel: with arguments: Dec 13 01:28:56.143298 kernel: /init Dec 13 01:28:56.143311 kernel: with environment: Dec 13 01:28:56.143325 kernel: HOME=/ Dec 13 01:28:56.143338 kernel: TERM=linux Dec 13 01:28:56.143442 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:28:56.143467 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:28:56.143496 systemd[1]: Detected virtualization amazon. Dec 13 01:28:56.143513 systemd[1]: Detected architecture x86-64. Dec 13 01:28:56.143528 systemd[1]: Running in initrd. Dec 13 01:28:56.143543 systemd[1]: No hostname configured, using default hostname. Dec 13 01:28:56.143562 systemd[1]: Hostname set to . Dec 13 01:28:56.143577 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:28:56.143592 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:28:56.143607 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:28:56.143623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:28:56.143640 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:28:56.143655 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:28:56.143670 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Dec 13 01:28:56.143687 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:28:56.143705 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:28:56.143720 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:28:56.143735 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:28:56.143751 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:28:56.143766 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:28:56.143781 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:28:56.143799 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:28:56.143814 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:28:56.143829 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:28:56.143845 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:28:56.143860 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:28:56.143879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:28:56.143897 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:28:56.143913 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:28:56.143928 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:28:56.143944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:28:56.143963 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:28:56.143978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:28:56.143994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:28:56.144009 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:28:56.144026 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:28:56.144044 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:28:56.144062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:28:56.144078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:56.144093 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:28:56.144108 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:28:56.144123 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:28:56.144179 systemd-journald[178]: Collecting audit messages is disabled. Dec 13 01:28:56.144217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:28:56.144234 systemd-journald[178]: Journal started Dec 13 01:28:56.144556 systemd-journald[178]: Runtime Journal (/run/log/journal/ec22126f5087be49bdcfc53e1b7c93be) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:28:56.095592 systemd-modules-load[179]: Inserted module 'overlay' Dec 13 01:28:56.302254 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:28:56.302293 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 01:28:56.302309 kernel: Bridge firewalling registered Dec 13 01:28:56.149174 systemd-modules-load[179]: Inserted module 'br_netfilter' Dec 13 01:28:56.298710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:28:56.300463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:56.302470 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:28:56.330635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:56.335823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:28:56.339403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:28:56.352596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:28:56.371849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:28:56.376925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:56.387947 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:28:56.391731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:28:56.394430 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:28:56.404610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:28:56.421413 dracut-cmdline[209]: dracut-dracut-053 Dec 13 01:28:56.425759 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:28:56.467248 systemd-resolved[213]: Positive Trust Anchors: Dec 13 01:28:56.467269 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:28:56.467331 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:28:56.482861 systemd-resolved[213]: Defaulting to hostname 'linux'. Dec 13 01:28:56.485159 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:28:56.487665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:28:56.532415 kernel: SCSI subsystem initialized Dec 13 01:28:56.542418 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:28:56.556408 kernel: iscsi: registered transport (tcp) Dec 13 01:28:56.578410 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:28:56.578487 kernel: QLogic iSCSI HBA Driver Dec 13 01:28:56.620887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:28:56.627588 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:28:56.685634 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:28:56.685716 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:28:56.685738 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:28:56.758415 kernel: raid6: avx512x4 gen() 17695 MB/s Dec 13 01:28:56.775410 kernel: raid6: avx512x2 gen() 17764 MB/s Dec 13 01:28:56.792404 kernel: raid6: avx512x1 gen() 18027 MB/s Dec 13 01:28:56.809405 kernel: raid6: avx2x4 gen() 17802 MB/s Dec 13 01:28:56.826412 kernel: raid6: avx2x2 gen() 16939 MB/s Dec 13 01:28:56.843450 kernel: raid6: avx2x1 gen() 13809 MB/s Dec 13 01:28:56.843504 kernel: raid6: using algorithm avx512x1 gen() 18027 MB/s Dec 13 01:28:56.861399 kernel: raid6: .... xor() 21030 MB/s, rmw enabled Dec 13 01:28:56.861432 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:28:56.882407 kernel: xor: automatically using best checksumming function avx Dec 13 01:28:57.046413 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:28:57.056357 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:28:57.063580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:28:57.077450 systemd-udevd[398]: Using default interface naming scheme 'v255'. Dec 13 01:28:57.084403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:28:57.096618 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:28:57.141186 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 01:28:57.189809 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:28:57.200767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:28:57.319090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:28:57.330878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:28:57.359183 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:28:57.360146 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:28:57.365542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:28:57.368738 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:28:57.380597 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:28:57.415957 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:28:57.445107 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:28:57.448862 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:28:57.476767 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:28:57.477286 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Dec 13 01:28:57.477635 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:82:51:5a:d4:f1 Dec 13 01:28:57.481472 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:28:57.481537 kernel: AES CTR mode by8 optimization enabled Dec 13 01:28:57.483023 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:28:57.488200 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:28:57.488362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:57.493852 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:57.495184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:28:57.495423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:57.497536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:57.508634 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:28:57.509829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:28:57.516041 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 01:28:57.528829 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:28:57.538490 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:28:57.538584 kernel: GPT:9289727 != 16777215 Dec 13 01:28:57.538610 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:28:57.538633 kernel: GPT:9289727 != 16777215 Dec 13 01:28:57.538655 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:28:57.538678 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:28:57.655409 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Dec 13 01:28:57.678406 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (446) Dec 13 01:28:57.701738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:28:57.713237 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:28:57.765939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:28:57.785203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:28:57.788626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:28:57.800517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:28:57.812129 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:28:57.812291 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:28:57.830624 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:28:57.839536 disk-uuid[624]: Primary Header is updated. Dec 13 01:28:57.839536 disk-uuid[624]: Secondary Entries is updated. Dec 13 01:28:57.839536 disk-uuid[624]: Secondary Header is updated. 
Dec 13 01:28:57.844406 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:28:57.849417 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:28:57.859964 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:28:58.866644 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:28:58.867789 disk-uuid[625]: The operation has completed successfully. Dec 13 01:28:59.072329 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:28:59.072470 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:28:59.100704 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:28:59.118807 sh[966]: Success Dec 13 01:28:59.179502 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:28:59.313617 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:28:59.324659 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:28:59.328692 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:28:59.377764 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:28:59.377837 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:59.377869 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:28:59.378693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:28:59.379479 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:28:59.478411 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:28:59.480320 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:28:59.483559 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:28:59.493646 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:28:59.507936 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:28:59.545659 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:59.545730 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:28:59.545752 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:28:59.558406 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:28:59.581585 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:28:59.582067 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:28:59.591910 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:28:59.603680 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:28:59.745802 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:28:59.758818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:28:59.807750 systemd-networkd[1159]: lo: Link UP Dec 13 01:28:59.807763 systemd-networkd[1159]: lo: Gained carrier Dec 13 01:28:59.811028 systemd-networkd[1159]: Enumeration completed Dec 13 01:28:59.811167 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:28:59.812838 systemd[1]: Reached target network.target - Network. 
Dec 13 01:28:59.818050 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:59.818062 systemd-networkd[1159]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:28:59.826075 systemd-networkd[1159]: eth0: Link UP Dec 13 01:28:59.826087 systemd-networkd[1159]: eth0: Gained carrier Dec 13 01:28:59.826102 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:59.845310 systemd-networkd[1159]: eth0: DHCPv4 address 172.31.31.20/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:29:00.134415 ignition[1077]: Ignition 2.19.0 Dec 13 01:29:00.134430 ignition[1077]: Stage: fetch-offline Dec 13 01:29:00.134700 ignition[1077]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:00.134712 ignition[1077]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:00.135932 ignition[1077]: Ignition finished successfully Dec 13 01:29:00.141594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:00.148727 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:00.181173 ignition[1168]: Ignition 2.19.0 Dec 13 01:29:00.181190 ignition[1168]: Stage: fetch Dec 13 01:29:00.183926 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:00.183950 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:00.184080 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:00.210064 ignition[1168]: PUT result: OK Dec 13 01:29:00.214820 ignition[1168]: parsed url from cmdline: "" Dec 13 01:29:00.214832 ignition[1168]: no config URL provided Dec 13 01:29:00.214843 ignition[1168]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:00.214858 ignition[1168]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:00.214883 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:00.216423 ignition[1168]: PUT result: OK Dec 13 01:29:00.216488 ignition[1168]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:29:00.219645 ignition[1168]: GET result: OK Dec 13 01:29:00.221076 ignition[1168]: parsing config with SHA512: ef3e2673dd048c115aecd75ebe4256a7f7a0be993cb7fb059a81b6defb93c388a4dd7207af12adf471aa9c11e176cbb9670907811a669c8a299f822a9f2605eb Dec 13 01:29:00.229679 unknown[1168]: fetched base config from "system" Dec 13 01:29:00.229689 unknown[1168]: fetched base config from "system" Dec 13 01:29:00.229695 unknown[1168]: fetched user config from "aws" Dec 13 01:29:00.231986 ignition[1168]: fetch: fetch complete Dec 13 01:29:00.231992 ignition[1168]: fetch: fetch passed Dec 13 01:29:00.234526 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:00.232063 ignition[1168]: Ignition finished successfully Dec 13 01:29:00.243742 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:29:00.282213 ignition[1174]: Ignition 2.19.0 Dec 13 01:29:00.282227 ignition[1174]: Stage: kargs Dec 13 01:29:00.282720 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:00.282733 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:00.282843 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:00.285495 ignition[1174]: PUT result: OK Dec 13 01:29:00.313097 ignition[1174]: kargs: kargs passed Dec 13 01:29:00.313204 ignition[1174]: Ignition finished successfully Dec 13 01:29:00.317868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:29:00.331177 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:00.370747 ignition[1180]: Ignition 2.19.0 Dec 13 01:29:00.370761 ignition[1180]: Stage: disks Dec 13 01:29:00.371403 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:00.371422 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:00.371536 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:00.382540 ignition[1180]: PUT result: OK Dec 13 01:29:00.389668 ignition[1180]: disks: disks passed Dec 13 01:29:00.390208 ignition[1180]: Ignition finished successfully Dec 13 01:29:00.394073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:00.399361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:00.404666 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:00.409213 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:00.413035 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:00.413295 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:00.427080 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:00.548246 systemd-fsck[1188]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:29:00.553999 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:00.569176 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:00.850458 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:00.853016 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:00.858675 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:00.874798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:00.892987 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:00.905172 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:29:00.905261 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:00.906482 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:29:00.988433 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1207) Dec 13 01:29:00.998980 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:00.999499 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:00.999531 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:29:01.001970 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:01.013582 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:01.030415 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:29:01.052703 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:29:01.145585 systemd-networkd[1159]: eth0: Gained IPv6LL Dec 13 01:29:01.504729 initrd-setup-root[1231]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:01.536510 initrd-setup-root[1238]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:01.557520 initrd-setup-root[1245]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:01.575252 initrd-setup-root[1252]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:01.956282 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:01.965872 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:01.979887 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:02.018847 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:02.019173 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:02.074747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:02.089894 ignition[1320]: INFO : Ignition 2.19.0 Dec 13 01:29:02.089894 ignition[1320]: INFO : Stage: mount Dec 13 01:29:02.107288 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:02.107288 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:02.107288 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:02.107288 ignition[1320]: INFO : PUT result: OK Dec 13 01:29:02.130020 ignition[1320]: INFO : mount: mount passed Dec 13 01:29:02.130020 ignition[1320]: INFO : Ignition finished successfully Dec 13 01:29:02.134174 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:02.148628 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:02.169630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:02.201410 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1331) Dec 13 01:29:02.203412 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:02.203470 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:02.204417 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:29:02.212413 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:29:02.214967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:02.243362 ignition[1348]: INFO : Ignition 2.19.0 Dec 13 01:29:02.243362 ignition[1348]: INFO : Stage: files Dec 13 01:29:02.248557 ignition[1348]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:02.248557 ignition[1348]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:02.248557 ignition[1348]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:02.257855 ignition[1348]: INFO : PUT result: OK Dec 13 01:29:02.267107 ignition[1348]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:29:02.286572 ignition[1348]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:29:02.286572 ignition[1348]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:29:02.317606 ignition[1348]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:29:02.320792 ignition[1348]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:29:02.324045 unknown[1348]: wrote ssh authorized keys file for user: core Dec 13 01:29:02.333440 ignition[1348]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:29:02.343871 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:02.349579 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:02.434763 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:29:02.617346 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:02.617346 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:29:02.625637 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 01:29:03.059017 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:29:03.613819 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:29:03.613819 ignition[1348]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:29:03.618363 ignition[1348]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:03.620879 ignition[1348]: INFO : files: files passed Dec 13 01:29:03.620879 ignition[1348]: INFO : Ignition finished successfully Dec 13 01:29:03.653681 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:29:03.663656 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:29:03.668224 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:29:03.673182 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:29:03.673307 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:29:03.701598 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:03.701598 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:03.707579 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:03.724059 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:03.738053 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:29:03.747265 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 13 01:29:03.783633 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:29:03.783765 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:29:03.792454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:29:03.795365 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:29:03.798159 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:29:03.814763 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:29:03.836069 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:03.845915 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:29:03.858977 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:03.859222 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:03.865548 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:29:03.865832 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:29:03.866268 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:03.866846 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:03.867018 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:29:03.867212 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:03.868282 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:03.869275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:03.869825 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:03.870427 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:03.870917 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:03.871049 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:03.871205 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:03.871349 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:03.871502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:03.871954 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:03.872482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:03.872649 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:03.890897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:03.930082 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:03.930951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:03.936211 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:03.936423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:03.940628 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:03.942157 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:03.959672 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:29:03.961108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:03.961448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:03.974072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:03.977744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:03.978129 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:03.983298 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:03.983595 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:03.998218 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:03.998349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:04.018430 ignition[1401]: INFO : Ignition 2.19.0 Dec 13 01:29:04.019959 ignition[1401]: INFO : Stage: umount Dec 13 01:29:04.019959 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:04.019959 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:29:04.024375 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:29:04.024375 ignition[1401]: INFO : PUT result: OK Dec 13 01:29:04.032145 ignition[1401]: INFO : umount: umount passed Dec 13 01:29:04.033269 ignition[1401]: INFO : Ignition finished successfully Dec 13 01:29:04.036264 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:04.036419 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:04.039713 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:04.039804 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:04.045726 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:04.045872 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:04.049340 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:29:04.050427 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:29:04.052550 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:04.056819 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:04.056922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:04.068304 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:04.071900 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:04.077751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:04.082265 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:04.085649 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:04.088070 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:04.088149 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:04.092622 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:04.092691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:04.095976 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:04.096210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:04.099543 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 13 01:29:04.099635 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:04.102663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:04.112831 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:04.117913 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:04.122607 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:04.122775 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:04.123673 systemd-networkd[1159]: eth0: DHCPv6 lease lost Dec 13 01:29:04.134745 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:04.134889 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:04.141480 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:04.141566 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:04.143568 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:04.145256 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:04.154240 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:04.158711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:04.158798 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:04.168541 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:04.179087 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:04.181232 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:04.201522 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:04.201690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:04.218271 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:04.218403 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:04.221511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:04.221587 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:04.223874 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:04.223959 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:04.227673 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:04.227749 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:04.232752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:04.233862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:04.243920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:04.245209 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:04.245284 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:04.247576 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:04.247659 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:04.249032 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:04.249099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:29:04.250468 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:04.250539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:04.252106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:04.252199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:04.254121 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:04.254335 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:04.264632 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:04.264822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:04.268071 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:04.276733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:04.299026 systemd[1]: Switching root. Dec 13 01:29:04.341491 systemd-journald[178]: Journal stopped Dec 13 01:29:06.843987 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Dec 13 01:29:06.844089 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:06.844118 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:06.844137 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:06.844156 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:06.844183 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:06.844202 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:06.844221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:06.844247 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:06.844265 kernel: audit: type=1403 audit(1734053345.053:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:06.844294 systemd[1]: Successfully loaded SELinux policy in 58.246ms. Dec 13 01:29:06.844321 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.853ms. Dec 13 01:29:06.844343 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:06.844362 systemd[1]: Detected virtualization amazon. Dec 13 01:29:06.845194 systemd[1]: Detected architecture x86-64. Dec 13 01:29:06.845236 systemd[1]: Detected first boot. Dec 13 01:29:06.845260 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:06.845284 zram_generator::config[1443]: No configuration found. Dec 13 01:29:06.845313 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:06.845336 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:29:06.845358 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:29:06.845378 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:06.845418 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:06.845468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:06.845495 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:06.845516 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
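[editor's note] After the switch root, systemd reports "Detected first boot" and "Initializing machine ID from VM UUID". The tiny sketch below just reads the two values side by side (the DMI product UUID needs root); the exact derivation systemd applies is not shown in this log, so treat it as a comparison rather than a reimplementation.

```python
from pathlib import Path

machine_id = Path("/etc/machine-id").read_text().strip()
# DMI product UUID exposed by the hypervisor (readable as root).
product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

print("machine-id  :", machine_id)
print("product UUID:", product_uuid)
# Rough check: machine-id is a 32-char lowercase hex string, UUID has dashes.
print("same hex?   :", machine_id == product_uuid.replace("-", "").lower())
```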
Dec 13 01:29:06.845671 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:06.845746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:06.845768 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:06.845818 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:06.845838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:06.845864 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:06.845883 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:06.845901 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:06.845920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:29:06.845940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:06.845963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:29:06.845982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:06.846005 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:29:06.846026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:29:06.846048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:06.846070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:06.846089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:06.846115 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:06.846134 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:06.846155 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:06.846175 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:06.846196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:06.846217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:06.846240 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:06.846262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:06.846285 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:06.846308 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:06.846336 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:06.846358 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:06.846395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:06.846423 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:06.846441 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:06.846459 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 13 01:29:06.846478 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:06.846497 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:06.846525 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:06.846546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:06.846569 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:06.846594 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:06.846617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:06.846642 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:06.846667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:06.846691 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:06.846715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:06.846744 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:06.846767 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:29:06.855478 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:29:06.855516 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:29:06.855539 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:29:06.855561 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:06.855583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:06.855605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:06.855635 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:06.855657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:06.855679 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:29:06.855700 systemd[1]: Stopped verity-setup.service. Dec 13 01:29:06.855722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:06.855744 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:06.855765 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:06.855786 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:06.855808 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:06.855833 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:06.855854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:06.855876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:06.855898 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:06.855918 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 13 01:29:06.855943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:06.855965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:06.855986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:06.856008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:06.856029 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:06.856051 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:06.856072 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:06.856095 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:06.856120 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:06.856142 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:06.856210 systemd-journald[1522]: Collecting audit messages is disabled. Dec 13 01:29:06.856250 kernel: loop: module loaded Dec 13 01:29:06.856274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:06.856295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:06.856318 systemd-journald[1522]: Journal started Dec 13 01:29:06.856358 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec22126f5087be49bdcfc53e1b7c93be) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:29:06.869130 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:06.284103 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:06.872725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:06.324645 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:29:06.325151 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:29:06.898412 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:06.898494 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:06.900411 kernel: fuse: init (API version 7.39) Dec 13 01:29:06.921275 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:06.914089 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:06.914460 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:06.916351 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:06.916697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:06.919444 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:06.921218 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:29:06.922904 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:06.925022 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:06.928216 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Dec 13 01:29:06.959592 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:06.962763 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:06.963430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:06.966983 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:06.985519 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 01:29:06.991914 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:06.995107 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:07.005497 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:07.016750 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:07.035608 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:29:07.037166 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:07.040551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:07.049667 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:07.053518 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:07.108459 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:07.131067 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec22126f5087be49bdcfc53e1b7c93be is 77.458ms for 964 entries. Dec 13 01:29:07.131067 systemd-journald[1522]: System Journal (/var/log/journal/ec22126f5087be49bdcfc53e1b7c93be) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:29:07.233311 systemd-journald[1522]: Received client request to flush runtime journal. Dec 13 01:29:07.233435 kernel: loop1: detected capacity change from 0 to 61336 Dec 13 01:29:07.133937 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:07.137300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:07.143364 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:07.159625 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:07.212221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:07.217718 udevadm[1582]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:29:07.235659 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:07.276717 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:07.287246 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:07.304416 kernel: loop2: detected capacity change from 0 to 142488 Dec 13 01:29:07.320119 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Dec 13 01:29:07.320149 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Dec 13 01:29:07.329132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 01:29:07.446224 kernel: loop3: detected capacity change from 0 to 205544 Dec 13 01:29:07.524282 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 01:29:07.606417 kernel: loop5: detected capacity change from 0 to 61336 Dec 13 01:29:07.633636 kernel: loop6: detected capacity change from 0 to 142488 Dec 13 01:29:07.661415 kernel: loop7: detected capacity change from 0 to 205544 Dec 13 01:29:07.699925 (sd-merge)[1594]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:29:07.700788 (sd-merge)[1594]: Merged extensions into '/usr'. Dec 13 01:29:07.712210 systemd[1]: Reloading requested from client PID 1549 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:07.712231 systemd[1]: Reloading... Dec 13 01:29:07.848507 zram_generator::config[1620]: No configuration found. Dec 13 01:29:08.171376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:08.354677 systemd[1]: Reloading finished in 641 ms. Dec 13 01:29:08.417545 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:29:08.427727 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:08.432951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:08.453053 systemd[1]: Reloading requested from client PID 1668 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:08.453067 systemd[1]: Reloading... Dec 13 01:29:08.483600 systemd-tmpfiles[1669]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:08.484250 systemd-tmpfiles[1669]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:08.488105 systemd-tmpfiles[1669]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:08.489069 systemd-tmpfiles[1669]: ACLs are not supported, ignoring. Dec 13 01:29:08.489171 systemd-tmpfiles[1669]: ACLs are not supported, ignoring. Dec 13 01:29:08.497060 systemd-tmpfiles[1669]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:08.497226 systemd-tmpfiles[1669]: Skipping /boot Dec 13 01:29:08.522760 systemd-tmpfiles[1669]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:08.522780 systemd-tmpfiles[1669]: Skipping /boot Dec 13 01:29:08.605412 zram_generator::config[1696]: No configuration found. Dec 13 01:29:08.759936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:08.859263 systemd[1]: Reloading finished in 405 ms. Dec 13 01:29:08.882347 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:29:08.887897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:08.919930 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:08.941243 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:08.946334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
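[editor's note] The "(sd-merge)" lines above show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' onto /usr, after which systemd reloads. A minimal sketch of driving the same mechanism by hand with the systemd-sysext CLI; the symlink recreates exactly the one Ignition wrote earlier in this log, and /etc/extensions is one of sysext's standard search paths.

```python
import subprocess
from pathlib import Path

extensions_dir = Path("/etc/extensions")
extensions_dir.mkdir(parents=True, exist_ok=True)

# Ignition created this symlink during the files stage; recreate it by hand.
link = extensions_dir / "kubernetes.raw"
if not link.exists():
    link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")

# Re-merge all discovered extension images into /usr, then show the result.
subprocess.run(["systemd-sysext", "refresh"], check=True)
subprocess.run(["systemd-sysext", "status"], check=True)
```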
Dec 13 01:29:08.961713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:08.974690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:08.981768 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:09.005787 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:09.009997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.010266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:09.021855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:09.033404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:09.048746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:09.050180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:09.050479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.056790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.057425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:09.057980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:09.058136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.065852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:09.068182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:09.081873 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.083004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:09.095614 ldconfig[1543]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:09.094574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:09.111081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:09.113532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:09.113844 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:09.115236 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:09.130627 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:09.178374 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:09.215490 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Dec 13 01:29:09.242306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:09.247583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:09.272066 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:09.283272 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:09.283596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:09.287422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:09.287765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:09.291845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:09.291952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:09.304441 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:29:09.307370 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:09.308209 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:09.319254 systemd-udevd[1758]: Using default interface naming scheme 'v255'. Dec 13 01:29:09.329629 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:29:09.349413 augenrules[1788]: No rules Dec 13 01:29:09.354492 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:09.368281 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:09.418304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:09.429737 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:09.448048 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:09.450115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:09.455090 systemd-resolved[1754]: Positive Trust Anchors: Dec 13 01:29:09.455483 systemd-resolved[1754]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:09.455795 systemd-resolved[1754]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:09.470112 systemd-resolved[1754]: Defaulting to hostname 'linux'. Dec 13 01:29:09.477145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:09.479762 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:09.552002 systemd-networkd[1800]: lo: Link UP Dec 13 01:29:09.552016 systemd-networkd[1800]: lo: Gained carrier Dec 13 01:29:09.597014 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 13 01:29:09.597288 (udev-worker)[1809]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:29:09.599763 systemd-networkd[1800]: Enumeration completed Dec 13 01:29:09.600517 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:09.602087 systemd[1]: Reached target network.target - Network. Dec 13 01:29:09.605094 systemd-networkd[1800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:09.605402 systemd-networkd[1800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:09.608445 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1797) Dec 13 01:29:09.613575 systemd-networkd[1800]: eth0: Link UP Dec 13 01:29:09.613833 systemd-networkd[1800]: eth0: Gained carrier Dec 13 01:29:09.613859 systemd-networkd[1800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:09.616376 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:09.624998 systemd-networkd[1800]: eth0: DHCPv4 address 172.31.31.20/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:29:09.637432 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1797) Dec 13 01:29:09.677237 systemd-networkd[1800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:09.717507 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 01:29:09.725614 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:29:09.748630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1809) Dec 13 01:29:09.748700 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 01:29:09.761403 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:29:09.766444 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 13 01:29:09.798014 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:29:09.869410 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:29:09.885745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:09.960861 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:29:09.964436 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:29:09.974032 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:09.984799 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:29:10.015012 lvm[1912]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:10.042225 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:10.046708 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:10.047928 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:10.055668 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
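[editor's note] systemd-networkd brings eth0 up and acquires 172.31.31.20/20 from 172.31.16.1 over DHCPv4. A small sketch that reads the resulting address back using iproute2's JSON output (assumes the `ip` tool is available, as it is on Flatcar).

```python
import json
import subprocess

out = subprocess.run(["ip", "-j", "addr", "show", "eth0"],
                     capture_output=True, text=True, check=True).stdout
(iface,) = json.loads(out)

for addr in iface.get("addr_info", []):
    if addr.get("family") == "inet":
        # Expect 172.31.31.20/20 on this instance, per the DHCPv4 lease above.
        print(f'{addr["local"]}/{addr["prefixlen"]}')
```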
Dec 13 01:29:10.072585 lvm[1918]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:10.102961 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:10.284647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:10.287059 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:10.288583 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:10.290531 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:10.293854 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:10.296759 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:10.298545 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:10.300091 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:10.300144 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:10.301197 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:10.304402 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:10.307458 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:10.317905 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:29:10.320421 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:29:10.322600 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:10.324926 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:10.326359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:10.326448 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:10.333691 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:29:10.339597 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:29:10.343891 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:29:10.354470 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:29:10.358607 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:29:10.360235 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:29:10.371944 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:29:10.390185 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:29:10.396543 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:29:10.412621 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:29:10.417715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:29:10.420798 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:29:10.437075 systemd[1]: Starting systemd-logind.service - User Login Management... 
Dec 13 01:29:10.440193 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:10.441196 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:29:10.444041 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:10.447637 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:10.456406 jq[1927]: false Dec 13 01:29:10.459954 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:10.461309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:10.543984 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:10.544744 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:10.545145 dbus-daemon[1926]: [system] SELinux support is enabled Dec 13 01:29:10.572043 extend-filesystems[1928]: Found loop4 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found loop5 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found loop6 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found loop7 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p1 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p2 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p3 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found usr Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p4 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p6 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p7 Dec 13 01:29:10.572043 extend-filesystems[1928]: Found nvme0n1p9 Dec 13 01:29:10.572043 extend-filesystems[1928]: Checking size of /dev/nvme0n1p9 Dec 13 01:29:10.546663 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:10.653674 jq[1941]: true Dec 13 01:29:10.552425 dbus-daemon[1926]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1800 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:29:10.654039 tar[1948]: linux-amd64/helm Dec 13 01:29:10.675900 update_engine[1939]: I20241213 01:29:10.653252 1939 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:10.675900 update_engine[1939]: I20241213 01:29:10.663322 1939 update_check_scheduler.cc:74] Next update check in 10m20s Dec 13 01:29:10.555430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:10.632866 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: ---------------------------------------------------- Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: corporation. 
Support and training for ntp-4 are Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: available at https://www.nwtime.org/support Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: ---------------------------------------------------- Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: proto: precision = 0.077 usec (-24) Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: basedate set to 2024-11-30 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: gps base set to 2024-12-01 (week 2343) Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listen normally on 3 eth0 172.31.31.20:123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listen normally on 4 lo [::1]:123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: bind(21) AF_INET6 fe80::482:51ff:fe5a:d4f1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: unable to create socket on eth0 (5) for fe80::482:51ff:fe5a:d4f1%2#123 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: failed to init interface for address fe80::482:51ff:fe5a:d4f1%2 Dec 13 01:29:10.676765 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Dec 13 01:29:10.716474 jq[1962]: true Dec 13 01:29:10.555688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:10.655880 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:29:10.724971 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:10.724971 ntpd[1930]: 13 Dec 01:29:10 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:10.725055 extend-filesystems[1928]: Resized partition /dev/nvme0n1p9 Dec 13 01:29:10.595079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:10.655905 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:29:10.728093 extend-filesystems[1978]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:29:10.595163 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:10.655916 ntpd[1930]: ---------------------------------------------------- Dec 13 01:29:10.597232 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:10.655926 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:29:10.597266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:10.655936 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:29:10.655636 (ntainerd)[1956]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:10.655945 ntpd[1930]: corporation. 
Support and training for ntp-4 are Dec 13 01:29:10.656864 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:29:10.655954 ntpd[1930]: available at https://www.nwtime.org/support Dec 13 01:29:10.660678 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:10.655965 ntpd[1930]: ---------------------------------------------------- Dec 13 01:29:10.682788 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:29:10.662789 ntpd[1930]: proto: precision = 0.077 usec (-24) Dec 13 01:29:10.738480 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:29:10.722311 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:29:10.665423 ntpd[1930]: basedate set to 2024-11-30 Dec 13 01:29:10.665447 ntpd[1930]: gps base set to 2024-12-01 (week 2343) Dec 13 01:29:10.669787 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:29:10.669841 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:29:10.670058 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:29:10.670096 ntpd[1930]: Listen normally on 3 eth0 172.31.31.20:123 Dec 13 01:29:10.670139 ntpd[1930]: Listen normally on 4 lo [::1]:123 Dec 13 01:29:10.670185 ntpd[1930]: bind(21) AF_INET6 fe80::482:51ff:fe5a:d4f1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:29:10.670207 ntpd[1930]: unable to create socket on eth0 (5) for fe80::482:51ff:fe5a:d4f1%2#123 Dec 13 01:29:10.670223 ntpd[1930]: failed to init interface for address fe80::482:51ff:fe5a:d4f1%2 Dec 13 01:29:10.670261 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Dec 13 01:29:10.713434 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:10.713470 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:29:10.775256 systemd-logind[1938]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:10.783515 systemd-logind[1938]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 01:29:10.783550 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:10.783799 systemd-logind[1938]: New seat seat0. Dec 13 01:29:10.786898 systemd[1]: Started systemd-logind.service - User Login Management. 
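The kernel line above reports EXT4 on nvme0n1p9 growing from 553472 to 1489915 blocks, and the resize2fs output notes 4k blocks. As a quick sanity check (editor's arithmetic, not from the log), that is roughly 2.1 GiB growing to about 5.7 GiB, consistent with the usual first-boot step of expanding the root partition to fill the attached volume.

    # Quick size check for the EXT4 resize reported above (4 KiB blocks,
    # per the resize2fs "(4k) blocks" note). Editor's arithmetic only.
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 1_489_915
    BLOCK_SIZE = 4096  # bytes

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")  # ~5.68 GiB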
Dec 13 01:29:10.799467 coreos-metadata[1925]: Dec 13 01:29:10.795 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:29:10.799467 coreos-metadata[1925]: Dec 13 01:29:10.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:29:10.799467 coreos-metadata[1925]: Dec 13 01:29:10.797 INFO Fetch successful Dec 13 01:29:10.799467 coreos-metadata[1925]: Dec 13 01:29:10.797 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:29:10.801281 coreos-metadata[1925]: Dec 13 01:29:10.801 INFO Fetch successful Dec 13 01:29:10.801281 coreos-metadata[1925]: Dec 13 01:29:10.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:29:10.808537 coreos-metadata[1925]: Dec 13 01:29:10.807 INFO Fetch successful Dec 13 01:29:10.808537 coreos-metadata[1925]: Dec 13 01:29:10.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:29:10.808537 coreos-metadata[1925]: Dec 13 01:29:10.808 INFO Fetch successful Dec 13 01:29:10.808537 coreos-metadata[1925]: Dec 13 01:29:10.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:29:10.811588 coreos-metadata[1925]: Dec 13 01:29:10.810 INFO Fetch failed with 404: resource not found Dec 13 01:29:10.811588 coreos-metadata[1925]: Dec 13 01:29:10.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:29:10.811588 coreos-metadata[1925]: Dec 13 01:29:10.811 INFO Fetch successful Dec 13 01:29:10.811588 coreos-metadata[1925]: Dec 13 01:29:10.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:29:10.814674 coreos-metadata[1925]: Dec 13 01:29:10.812 INFO Fetch successful Dec 13 01:29:10.814674 coreos-metadata[1925]: Dec 13 01:29:10.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:29:10.815569 coreos-metadata[1925]: Dec 13 01:29:10.814 INFO Fetch successful Dec 13 01:29:10.819520 coreos-metadata[1925]: Dec 13 01:29:10.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:29:10.821159 coreos-metadata[1925]: Dec 13 01:29:10.821 INFO Fetch successful Dec 13 01:29:10.821159 coreos-metadata[1925]: Dec 13 01:29:10.821 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:29:10.824595 coreos-metadata[1925]: Dec 13 01:29:10.824 INFO Fetch successful Dec 13 01:29:10.870210 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:29:10.936523 extend-filesystems[1978]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:29:10.936523 extend-filesystems[1978]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:29:10.936523 extend-filesystems[1978]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:29:10.942597 extend-filesystems[1928]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:29:10.954150 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:10.954417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:29:10.984426 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:29:10.989606 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
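The coreos-metadata fetches above follow the token-based EC2 instance metadata flow: a PUT to /latest/api/token, then GETs against the 2021-01-03 metadata tree with the token attached. A minimal sketch of that flow in Python (illustration only, not the agent's actual code; the endpoint and metadata paths are the ones visible in the log):

    # Sketch of the IMDS token flow seen above: request a session token
    # with PUT, then read metadata paths with the token attached.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4",
                 "placement/availability-zone"):
        print(path, "=", imds_get(path, token))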
Dec 13 01:29:10.998206 bash[1999]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:11.007826 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1804) Dec 13 01:29:11.006977 dbus-daemon[1926]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1969 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:29:10.999863 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:11.008806 systemd[1]: Starting sshkeys.service... Dec 13 01:29:11.037748 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:29:11.039573 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:29:11.051040 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:11.063889 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:11.079978 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:11.108297 polkitd[2014]: Started polkitd version 121 Dec 13 01:29:11.187217 polkitd[2014]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:29:11.187487 polkitd[2014]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:29:11.192418 polkitd[2014]: Finished loading, compiling and executing 2 rules Dec 13 01:29:11.193295 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:29:11.193525 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:29:11.198534 polkitd[2014]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:29:11.233617 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:11.261784 systemd-hostnamed[1969]: Hostname set to (transient) Dec 13 01:29:11.261917 systemd-resolved[1754]: System hostname changed to 'ip-172-31-31-20'. Dec 13 01:29:11.340884 coreos-metadata[2021]: Dec 13 01:29:11.340 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:29:11.342467 coreos-metadata[2021]: Dec 13 01:29:11.342 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:29:11.343330 coreos-metadata[2021]: Dec 13 01:29:11.343 INFO Fetch successful Dec 13 01:29:11.343330 coreos-metadata[2021]: Dec 13 01:29:11.343 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:29:11.344304 coreos-metadata[2021]: Dec 13 01:29:11.344 INFO Fetch successful Dec 13 01:29:11.352596 unknown[2021]: wrote ssh authorized keys file for user: core Dec 13 01:29:11.385422 update-ssh-keys[2108]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:11.387154 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:29:11.392120 systemd[1]: Finished sshkeys.service. Dec 13 01:29:11.450972 systemd-networkd[1800]: eth0: Gained IPv6LL Dec 13 01:29:11.458145 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:11.470327 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:11.477711 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
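The "eth0: Gained IPv6LL" event above is what the earlier ntpd complaints were waiting for: ntpd started before the link-local address fe80::482:51ff:fe5a:d4f1 was usable, so its bind failed with "Cannot assign requested address"; it keeps watching the routing socket and, later in this log, adds "Listen normally on 6 eth0 [fe80::...]" once the address exists. A minimal illustration of why binding a link-local address fails until it is assigned (address and interface taken from the log; port 123 needs root):

    import socket

    ADDR, PORT, IFACE = "fe80::482:51ff:fe5a:d4f1", 123, "eth0"

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        scope = socket.if_nametoindex(IFACE)   # link-local needs a scope id
        s.bind((ADDR, PORT, 0, scope))         # EADDRNOTAVAIL until assigned
        print("bound")
    except OSError as exc:
        print("bind failed:", exc)
    finally:
        s.close()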
Dec 13 01:29:11.489672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:11.494451 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:11.660837 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:11.733706 amazon-ssm-agent[2121]: Initializing new seelog logger Dec 13 01:29:11.733706 amazon-ssm-agent[2121]: New Seelog Logger Creation Complete Dec 13 01:29:11.733706 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.733706 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.736710 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 processing appconfig overrides Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 processing appconfig overrides Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.744509 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 processing appconfig overrides Dec 13 01:29:11.746658 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO Proxy environment variables: Dec 13 01:29:11.759555 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.759555 amazon-ssm-agent[2121]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:29:11.762537 amazon-ssm-agent[2121]: 2024/12/13 01:29:11 processing appconfig overrides Dec 13 01:29:11.818936 containerd[1956]: time="2024-12-13T01:29:11.818828705Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:11.848801 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO no_proxy: Dec 13 01:29:11.893410 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:11.948416 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO https_proxy: Dec 13 01:29:11.960221 containerd[1956]: time="2024-12-13T01:29:11.960157406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.972873 containerd[1956]: time="2024-12-13T01:29:11.972499354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:11.972873 containerd[1956]: time="2024-12-13T01:29:11.972576212Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:11.972873 containerd[1956]: time="2024-12-13T01:29:11.972604140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:11.972873 containerd[1956]: time="2024-12-13T01:29:11.972850408Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 01:29:11.973110 containerd[1956]: time="2024-12-13T01:29:11.972888915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.973110 containerd[1956]: time="2024-12-13T01:29:11.972980048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:11.973110 containerd[1956]: time="2024-12-13T01:29:11.972998963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.974524099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.974584733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.974610580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.974628162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.974814513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.975280877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.976202140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.976231554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.976493741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:11.979409 containerd[1956]: time="2024-12-13T01:29:11.978144428Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:11.988327 containerd[1956]: time="2024-12-13T01:29:11.988280411Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:11.988450 containerd[1956]: time="2024-12-13T01:29:11.988367720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:11.988495 containerd[1956]: time="2024-12-13T01:29:11.988455151Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:11.988495 containerd[1956]: time="2024-12-13T01:29:11.988482854Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 01:29:11.988560 containerd[1956]: time="2024-12-13T01:29:11.988506609Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:11.990405 containerd[1956]: time="2024-12-13T01:29:11.988695270Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:11.991182 containerd[1956]: time="2024-12-13T01:29:11.991143854Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:11.991365 containerd[1956]: time="2024-12-13T01:29:11.991340784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:11.991450 containerd[1956]: time="2024-12-13T01:29:11.991430102Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:11.991492 containerd[1956]: time="2024-12-13T01:29:11.991459445Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:11.991530 containerd[1956]: time="2024-12-13T01:29:11.991486069Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991530 containerd[1956]: time="2024-12-13T01:29:11.991507081Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991618 containerd[1956]: time="2024-12-13T01:29:11.991527995Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991618 containerd[1956]: time="2024-12-13T01:29:11.991550577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991618 containerd[1956]: time="2024-12-13T01:29:11.991571911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991618 containerd[1956]: time="2024-12-13T01:29:11.991603321Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991625808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991644859Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991674468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991695980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991715608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991752 containerd[1956]: time="2024-12-13T01:29:11.991735212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991754374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991777273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991796032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991816730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991860613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991882733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991900712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991918158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.991968 containerd[1956]: time="2024-12-13T01:29:11.991937246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.992277 containerd[1956]: time="2024-12-13T01:29:11.991971606Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:11.992277 containerd[1956]: time="2024-12-13T01:29:11.992010720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.992277 containerd[1956]: time="2024-12-13T01:29:11.992029510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:11.992277 containerd[1956]: time="2024-12-13T01:29:11.992046444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:11.994519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998157265Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998302367Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998321909Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998340514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998357286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998379094Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998434061Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:12.007844 containerd[1956]: time="2024-12-13T01:29:11.998450622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:12.003965 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:12.008756 containerd[1956]: time="2024-12-13T01:29:11.998947337Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:12.008756 containerd[1956]: time="2024-12-13T01:29:11.999047059Z" level=info msg="Connect containerd service" Dec 13 01:29:12.008756 containerd[1956]: time="2024-12-13T01:29:11.999106636Z" level=info msg="using legacy CRI server" Dec 13 01:29:12.008756 containerd[1956]: time="2024-12-13T01:29:11.999118616Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:12.008756 containerd[1956]: time="2024-12-13T01:29:12.008007367Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:12.013949 containerd[1956]: time="2024-12-13T01:29:12.013780752Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:12.014820 containerd[1956]: time="2024-12-13T01:29:12.014324756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:12.017828 containerd[1956]: time="2024-12-13T01:29:12.017791846Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:12.018064 containerd[1956]: time="2024-12-13T01:29:12.018027839Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036603211Z" level=info msg="Start recovering state" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036746628Z" level=info msg="Start event monitor" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036775457Z" level=info msg="Start snapshots syncer" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036789749Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036800624Z" level=info msg="Start streaming server" Dec 13 01:29:12.037178 containerd[1956]: time="2024-12-13T01:29:12.036896293Z" level=info msg="containerd successfully booted in 0.221677s" Dec 13 01:29:12.037284 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:12.046044 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:29:12.046305 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:29:12.051659 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO http_proxy: Dec 13 01:29:12.060212 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:12.098181 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:12.110316 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:12.118224 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:29:12.120645 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:12.149894 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:29:12.249039 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:29:12.314521 tar[1948]: linux-amd64/LICENSE Dec 13 01:29:12.315003 tar[1948]: linux-amd64/README.md Dec 13 01:29:12.336159 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 01:29:12.349084 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO Agent will take identity from EC2 Dec 13 01:29:12.449179 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:12.553463 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:12.651903 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [Registrar] Starting registrar module Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:12 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:12 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:12 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:29:12.665514 amazon-ssm-agent[2121]: 2024-12-13 01:29:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:29:12.750542 amazon-ssm-agent[2121]: 2024-12-13 01:29:12 INFO [CredentialRefresher] Next credential rotation will be in 31.683325784766666 minutes Dec 13 01:29:13.525109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:13.537048 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:13.540468 systemd[1]: Startup finished in 914ms (kernel) + 9.206s (initrd) + 8.543s (userspace) = 18.664s. 
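A small cross-check on the "Startup finished" line above: the rounded per-stage figures sum to 18.663 s, while systemd adds the unrounded microsecond values, hence the printed 18.664 s (editor's arithmetic):

    stages = {"kernel": 0.914, "initrd": 9.206, "userspace": 8.543}
    print(f"{sum(stages.values()):.3f} s")  # 18.663 s vs. the reported 18.664 s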
Dec 13 01:29:13.544520 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:13.657353 ntpd[1930]: Listen normally on 6 eth0 [fe80::482:51ff:fe5a:d4f1%2]:123 Dec 13 01:29:13.658675 ntpd[1930]: 13 Dec 01:29:13 ntpd[1930]: Listen normally on 6 eth0 [fe80::482:51ff:fe5a:d4f1%2]:123 Dec 13 01:29:13.712589 amazon-ssm-agent[2121]: 2024-12-13 01:29:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:29:13.817597 amazon-ssm-agent[2121]: 2024-12-13 01:29:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2176) started Dec 13 01:29:13.921773 amazon-ssm-agent[2121]: 2024-12-13 01:29:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:29:14.780065 kubelet[2172]: E1213 01:29:14.779957 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:14.783172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:14.783377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:14.784076 systemd[1]: kubelet.service: Consumed 1.004s CPU time. Dec 13 01:29:18.532982 systemd-resolved[1754]: Clock change detected. Flushing caches. Dec 13 01:29:21.093326 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:29:21.106801 systemd[1]: Started sshd@0-172.31.31.20:22-139.178.68.195:42088.service - OpenSSH per-connection server daemon (139.178.68.195:42088). Dec 13 01:29:21.338484 sshd[2196]: Accepted publickey for core from 139.178.68.195 port 42088 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:21.341519 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:21.372416 systemd-logind[1938]: New session 1 of user core. Dec 13 01:29:21.375977 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:29:21.389942 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:29:21.449631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:29:21.462567 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:29:21.493124 (systemd)[2200]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:29:21.689765 systemd[2200]: Queued start job for default target default.target. Dec 13 01:29:21.696415 systemd[2200]: Created slice app.slice - User Application Slice. Dec 13 01:29:21.696459 systemd[2200]: Reached target paths.target - Paths. Dec 13 01:29:21.696479 systemd[2200]: Reached target timers.target - Timers. Dec 13 01:29:21.698174 systemd[2200]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:29:21.713899 systemd[2200]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:29:21.714057 systemd[2200]: Reached target sockets.target - Sockets. Dec 13 01:29:21.714079 systemd[2200]: Reached target basic.target - Basic System. 
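The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not been joined yet: that file is generated by kubeadm init/join, and until it appears systemd keeps restart-looping the unit (the "Scheduled restart job, restart counter is at 1/2/3" lines further down in this log). A trivial readiness check along those lines (path from the log; the kubeadm step itself is outside this log):

    import pathlib, sys

    cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
    if not cfg.exists():
        sys.exit("kubelet config not present yet; kubeadm init/join writes it")
    print("kubelet config found:", cfg)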
Dec 13 01:29:21.714134 systemd[2200]: Reached target default.target - Main User Target. Dec 13 01:29:21.714181 systemd[2200]: Startup finished in 202ms. Dec 13 01:29:21.714496 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:29:21.722129 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:29:21.878295 systemd[1]: Started sshd@1-172.31.31.20:22-139.178.68.195:42094.service - OpenSSH per-connection server daemon (139.178.68.195:42094). Dec 13 01:29:22.036282 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 42094 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:22.038927 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:22.046414 systemd-logind[1938]: New session 2 of user core. Dec 13 01:29:22.053134 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:29:22.183498 sshd[2211]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:22.190972 systemd[1]: sshd@1-172.31.31.20:22-139.178.68.195:42094.service: Deactivated successfully. Dec 13 01:29:22.194702 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:29:22.198255 systemd-logind[1938]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:29:22.201941 systemd-logind[1938]: Removed session 2. Dec 13 01:29:22.227336 systemd[1]: Started sshd@2-172.31.31.20:22-139.178.68.195:42108.service - OpenSSH per-connection server daemon (139.178.68.195:42108). Dec 13 01:29:22.396596 sshd[2218]: Accepted publickey for core from 139.178.68.195 port 42108 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:22.397985 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:22.406448 systemd-logind[1938]: New session 3 of user core. Dec 13 01:29:22.419148 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:29:22.539461 sshd[2218]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:22.544223 systemd[1]: sshd@2-172.31.31.20:22-139.178.68.195:42108.service: Deactivated successfully. Dec 13 01:29:22.546618 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:29:22.547567 systemd-logind[1938]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:29:22.548740 systemd-logind[1938]: Removed session 3. Dec 13 01:29:22.572244 systemd[1]: Started sshd@3-172.31.31.20:22-139.178.68.195:42112.service - OpenSSH per-connection server daemon (139.178.68.195:42112). Dec 13 01:29:22.733465 sshd[2225]: Accepted publickey for core from 139.178.68.195 port 42112 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:22.734601 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:22.739034 systemd-logind[1938]: New session 4 of user core. Dec 13 01:29:22.745085 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:29:22.867935 sshd[2225]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:22.881775 systemd[1]: sshd@3-172.31.31.20:22-139.178.68.195:42112.service: Deactivated successfully. Dec 13 01:29:22.886427 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:29:22.891959 systemd-logind[1938]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:29:22.903486 systemd[1]: Started sshd@4-172.31.31.20:22-139.178.68.195:42128.service - OpenSSH per-connection server daemon (139.178.68.195:42128). 
Dec 13 01:29:22.904754 systemd-logind[1938]: Removed session 4. Dec 13 01:29:23.067502 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 42128 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:23.069810 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:23.076320 systemd-logind[1938]: New session 5 of user core. Dec 13 01:29:23.084090 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:29:23.237117 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:29:23.237601 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:23.267656 sudo[2235]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:23.294695 sshd[2232]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:23.339220 systemd[1]: sshd@4-172.31.31.20:22-139.178.68.195:42128.service: Deactivated successfully. Dec 13 01:29:23.342669 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:29:23.345915 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:29:23.354283 systemd[1]: Started sshd@5-172.31.31.20:22-139.178.68.195:42144.service - OpenSSH per-connection server daemon (139.178.68.195:42144). Dec 13 01:29:23.358014 systemd-logind[1938]: Removed session 5. Dec 13 01:29:23.527954 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 42144 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:23.530362 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:23.541008 systemd-logind[1938]: New session 6 of user core. Dec 13 01:29:23.544870 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:29:23.647449 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:29:23.647861 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:23.656064 sudo[2244]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:23.666268 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:29:23.666662 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:23.692431 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:23.695922 auditctl[2247]: No rules Dec 13 01:29:23.696705 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:29:23.697429 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:23.711685 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:23.781099 augenrules[2265]: No rules Dec 13 01:29:23.785527 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:23.787995 sudo[2243]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:23.815064 sshd[2240]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:23.820666 systemd[1]: sshd@5-172.31.31.20:22-139.178.68.195:42144.service: Deactivated successfully. Dec 13 01:29:23.829458 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:29:23.831287 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:29:23.836200 systemd-logind[1938]: Removed session 6. 
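The sudo/audit sequence above removes the two shipped rule fragments, restarts audit-rules, and both auditctl and augenrules then report "No rules". augenrules works by merging the *.rules fragments under /etc/audit/rules.d into a single ruleset, so with the fragments deleted there is nothing left to load. A conceptual sketch of that merge step (prints instead of writing /etc/audit/audit.rules):

    import pathlib

    fragments = sorted(pathlib.Path("/etc/audit/rules.d").glob("*.rules"))
    if not fragments:
        print("No rules")            # matches the augenrules output in the log
    for frag in fragments:
        print(f"## from {frag.name}")
        print(frag.read_text().rstrip())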
Dec 13 01:29:23.853781 systemd[1]: Started sshd@6-172.31.31.20:22-139.178.68.195:42154.service - OpenSSH per-connection server daemon (139.178.68.195:42154). Dec 13 01:29:24.037823 sshd[2273]: Accepted publickey for core from 139.178.68.195 port 42154 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:29:24.039127 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:24.045391 systemd-logind[1938]: New session 7 of user core. Dec 13 01:29:24.052780 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:29:24.150587 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:29:24.150996 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:29:24.870235 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:29:24.871453 (dockerd)[2292]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:29:25.549028 dockerd[2292]: time="2024-12-13T01:29:25.548966886Z" level=info msg="Starting up" Dec 13 01:29:25.730202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:25.736212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:25.821499 dockerd[2292]: time="2024-12-13T01:29:25.821141753Z" level=info msg="Loading containers: start." Dec 13 01:29:25.994884 kernel: Initializing XFRM netlink socket Dec 13 01:29:26.018069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:26.021684 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:26.045191 (udev-worker)[2318]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:29:26.116535 kubelet[2366]: E1213 01:29:26.116353 2366 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:26.124707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:26.125883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:26.133137 systemd-networkd[1800]: docker0: Link UP Dec 13 01:29:26.160891 dockerd[2292]: time="2024-12-13T01:29:26.160831509Z" level=info msg="Loading containers: done." 
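Right after install.sh runs under sudo, the Docker engine is started and, as the next lines show, finishes loading containers and exposes its API on /run/docker.sock. A minimal sketch of talking to that Unix socket from Python using the version-less /_ping endpoint (illustration only; needs root or docker-group access):

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' when the daemon is up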
Dec 13 01:29:26.210029 dockerd[2292]: time="2024-12-13T01:29:26.209984357Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:29:26.210305 dockerd[2292]: time="2024-12-13T01:29:26.210113680Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:29:26.210305 dockerd[2292]: time="2024-12-13T01:29:26.210249573Z" level=info msg="Daemon has completed initialization" Dec 13 01:29:26.249821 dockerd[2292]: time="2024-12-13T01:29:26.249651715Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:29:26.249961 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:29:27.489598 containerd[1956]: time="2024-12-13T01:29:27.489544655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:29:28.166603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601849957.mount: Deactivated successfully. Dec 13 01:29:30.540267 containerd[1956]: time="2024-12-13T01:29:30.540208937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:30.541733 containerd[1956]: time="2024-12-13T01:29:30.541689330Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Dec 13 01:29:30.543491 containerd[1956]: time="2024-12-13T01:29:30.543040264Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:30.545975 containerd[1956]: time="2024-12-13T01:29:30.545933166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:30.547185 containerd[1956]: time="2024-12-13T01:29:30.547141264Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 3.057549394s" Dec 13 01:29:30.547281 containerd[1956]: time="2024-12-13T01:29:30.547195929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:29:30.549347 containerd[1956]: time="2024-12-13T01:29:30.549315398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:29:32.837383 containerd[1956]: time="2024-12-13T01:29:32.837322199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:32.839663 containerd[1956]: time="2024-12-13T01:29:32.839494332Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Dec 13 01:29:32.842257 containerd[1956]: time="2024-12-13T01:29:32.841869598Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:32.846357 containerd[1956]: time="2024-12-13T01:29:32.846305988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:32.847809 containerd[1956]: time="2024-12-13T01:29:32.847746659Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.298391378s" Dec 13 01:29:32.848015 containerd[1956]: time="2024-12-13T01:29:32.847992266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:29:32.848988 containerd[1956]: time="2024-12-13T01:29:32.848957932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:29:34.679130 containerd[1956]: time="2024-12-13T01:29:34.679076638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.683041 containerd[1956]: time="2024-12-13T01:29:34.682789574Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Dec 13 01:29:34.687007 containerd[1956]: time="2024-12-13T01:29:34.686358290Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.690133 containerd[1956]: time="2024-12-13T01:29:34.690088361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.691201 containerd[1956]: time="2024-12-13T01:29:34.691162488Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.842169443s" Dec 13 01:29:34.691341 containerd[1956]: time="2024-12-13T01:29:34.691320711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:29:34.692295 containerd[1956]: time="2024-12-13T01:29:34.692264568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:29:35.922818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678388636.mount: Deactivated successfully. Dec 13 01:29:36.249928 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:29:36.270189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:36.570495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:29:36.583475 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:36.685007 kubelet[2525]: E1213 01:29:36.684893 2525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:36.688071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:36.688259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:36.773076 containerd[1956]: time="2024-12-13T01:29:36.773023444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:36.774309 containerd[1956]: time="2024-12-13T01:29:36.774168583Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Dec 13 01:29:36.776473 containerd[1956]: time="2024-12-13T01:29:36.775269980Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:36.778553 containerd[1956]: time="2024-12-13T01:29:36.777723333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:36.778553 containerd[1956]: time="2024-12-13T01:29:36.778386920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.086084913s" Dec 13 01:29:36.778553 containerd[1956]: time="2024-12-13T01:29:36.778427215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:29:36.779162 containerd[1956]: time="2024-12-13T01:29:36.779137075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:29:37.315550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327116175.mount: Deactivated successfully. 
Dec 13 01:29:38.609281 containerd[1956]: time="2024-12-13T01:29:38.609224207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.610645 containerd[1956]: time="2024-12-13T01:29:38.610592448Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:29:38.612017 containerd[1956]: time="2024-12-13T01:29:38.611608568Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.614469 containerd[1956]: time="2024-12-13T01:29:38.614426882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:38.615901 containerd[1956]: time="2024-12-13T01:29:38.615829827Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.836568524s" Dec 13 01:29:38.615901 containerd[1956]: time="2024-12-13T01:29:38.615897449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:29:38.617004 containerd[1956]: time="2024-12-13T01:29:38.616909321Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:29:39.177560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791649396.mount: Deactivated successfully. 
Dec 13 01:29:39.196075 containerd[1956]: time="2024-12-13T01:29:39.196016066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.197196 containerd[1956]: time="2024-12-13T01:29:39.197111967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 13 01:29:39.199159 containerd[1956]: time="2024-12-13T01:29:39.199117867Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.208769 containerd[1956]: time="2024-12-13T01:29:39.206114137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:39.208769 containerd[1956]: time="2024-12-13T01:29:39.207236318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 590.102633ms" Dec 13 01:29:39.208769 containerd[1956]: time="2024-12-13T01:29:39.207279371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:29:39.211312 containerd[1956]: time="2024-12-13T01:29:39.211266592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:29:39.819701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827827167.mount: Deactivated successfully. Dec 13 01:29:42.173362 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:29:42.582879 containerd[1956]: time="2024-12-13T01:29:42.582189992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.587752 containerd[1956]: time="2024-12-13T01:29:42.587334951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Dec 13 01:29:42.592168 containerd[1956]: time="2024-12-13T01:29:42.590670621Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.600046 containerd[1956]: time="2024-12-13T01:29:42.599943585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.605132 containerd[1956]: time="2024-12-13T01:29:42.601472992Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.389523095s" Dec 13 01:29:42.605132 containerd[1956]: time="2024-12-13T01:29:42.601520616Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:29:46.752831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:29:46.766233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:46.798828 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:29:46.798992 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:29:46.799509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:46.808893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:46.873388 systemd[1]: Reloading requested from client PID 2667 ('systemctl') (unit session-7.scope)... Dec 13 01:29:46.873409 systemd[1]: Reloading... Dec 13 01:29:47.102873 zram_generator::config[2707]: No configuration found. Dec 13 01:29:47.291429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:47.411976 systemd[1]: Reloading finished in 537 ms. Dec 13 01:29:47.501228 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:47.505063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:47.505780 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:47.506060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:47.512313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:47.864359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:29:47.880389 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:47.929682 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:47.929682 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:47.929682 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:47.931798 kubelet[2770]: I1213 01:29:47.931743 2770 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:48.204770 kubelet[2770]: I1213 01:29:48.204621 2770 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:29:48.206695 kubelet[2770]: I1213 01:29:48.206663 2770 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:48.207210 kubelet[2770]: I1213 01:29:48.207190 2770 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:29:48.255883 kubelet[2770]: I1213 01:29:48.255486 2770 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:48.258874 kubelet[2770]: E1213 01:29:48.258619 2770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:48.269895 kubelet[2770]: E1213 01:29:48.269834 2770 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:29:48.269895 kubelet[2770]: I1213 01:29:48.269890 2770 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:29:48.274635 kubelet[2770]: I1213 01:29:48.274604 2770 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:29:48.276630 kubelet[2770]: I1213 01:29:48.276602 2770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:29:48.276882 kubelet[2770]: I1213 01:29:48.276825 2770 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:48.277082 kubelet[2770]: I1213 01:29:48.276882 2770 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:29:48.277235 kubelet[2770]: I1213 01:29:48.277089 2770 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:48.277235 kubelet[2770]: I1213 01:29:48.277108 2770 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:29:48.277311 kubelet[2770]: I1213 01:29:48.277237 2770 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:48.280961 kubelet[2770]: I1213 01:29:48.280291 2770 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:29:48.280961 kubelet[2770]: I1213 01:29:48.280327 2770 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:48.280961 kubelet[2770]: I1213 01:29:48.280367 2770 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:29:48.280961 kubelet[2770]: I1213 01:29:48.280386 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:48.291914 kubelet[2770]: W1213 01:29:48.291823 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-20&limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:48.291914 kubelet[2770]: E1213 01:29:48.291916 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-20&limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:48.294018 kubelet[2770]: W1213 01:29:48.293964 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:48.294112 kubelet[2770]: E1213 01:29:48.294028 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:48.294157 kubelet[2770]: I1213 01:29:48.294144 2770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:48.300236 kubelet[2770]: I1213 01:29:48.300207 2770 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:48.301339 kubelet[2770]: W1213 01:29:48.301313 2770 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:29:48.302155 kubelet[2770]: I1213 01:29:48.302064 2770 server.go:1269] "Started kubelet" Dec 13 01:29:48.311832 kubelet[2770]: I1213 01:29:48.310987 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:48.315243 kubelet[2770]: I1213 01:29:48.314238 2770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:48.315787 kubelet[2770]: I1213 01:29:48.315716 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:48.316320 kubelet[2770]: I1213 01:29:48.316303 2770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:48.320781 kubelet[2770]: E1213 01:29:48.320750 2770 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-20\" not found" Dec 13 01:29:48.320781 kubelet[2770]: I1213 01:29:48.318229 2770 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:29:48.322302 kubelet[2770]: I1213 01:29:48.318272 2770 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:29:48.322510 kubelet[2770]: I1213 01:29:48.317613 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:29:48.324619 kubelet[2770]: I1213 01:29:48.324596 2770 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:29:48.332901 kubelet[2770]: E1213 01:29:48.325627 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.20:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-20.1810985e874e199d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-20,UID:ip-172-31-31-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-20,},FirstTimestamp:2024-12-13 01:29:48.302031261 +0000 UTC m=+0.416774880,LastTimestamp:2024-12-13 01:29:48.302031261 +0000 UTC m=+0.416774880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-20,}" Dec 13 01:29:48.332901 kubelet[2770]: E1213 01:29:48.332152 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-20?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="200ms" Dec 13 01:29:48.333868 kubelet[2770]: I1213 01:29:48.333570 2770 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:48.335490 kubelet[2770]: I1213 01:29:48.335463 2770 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:48.335631 kubelet[2770]: I1213 01:29:48.335557 2770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:48.337280 kubelet[2770]: W1213 01:29:48.336195 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:48.337280 kubelet[2770]: E1213 01:29:48.336262 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:48.342821 kubelet[2770]: I1213 01:29:48.341907 2770 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:48.364392 kubelet[2770]: I1213 01:29:48.364214 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:48.367455 kubelet[2770]: I1213 01:29:48.367402 2770 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:29:48.367866 kubelet[2770]: I1213 01:29:48.367699 2770 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:48.367866 kubelet[2770]: I1213 01:29:48.367734 2770 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:29:48.367866 kubelet[2770]: E1213 01:29:48.367798 2770 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:48.371588 kubelet[2770]: W1213 01:29:48.371487 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:48.371588 kubelet[2770]: E1213 01:29:48.371536 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:48.377781 kubelet[2770]: I1213 01:29:48.377730 2770 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:48.377781 kubelet[2770]: I1213 01:29:48.377748 2770 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:48.377781 kubelet[2770]: I1213 01:29:48.377769 2770 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:48.379971 kubelet[2770]: I1213 01:29:48.379949 2770 policy_none.go:49] "None policy: Start" Dec 13 01:29:48.380761 kubelet[2770]: I1213 01:29:48.380743 2770 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:48.380891 kubelet[2770]: I1213 01:29:48.380769 2770 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:48.387445 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:29:48.397210 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:29:48.401049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:29:48.413875 kubelet[2770]: I1213 01:29:48.413832 2770 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:48.415013 kubelet[2770]: I1213 01:29:48.414915 2770 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:29:48.415013 kubelet[2770]: I1213 01:29:48.414942 2770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:48.415966 kubelet[2770]: I1213 01:29:48.415612 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:48.418881 kubelet[2770]: E1213 01:29:48.418822 2770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-20\" not found" Dec 13 01:29:48.478914 systemd[1]: Created slice kubepods-burstable-pod8f277926f7948ce131e3dc44161e8cbf.slice - libcontainer container kubepods-burstable-pod8f277926f7948ce131e3dc44161e8cbf.slice. Dec 13 01:29:48.498800 systemd[1]: Created slice kubepods-burstable-pod9c40a5d663b166d3b9d529093a20243d.slice - libcontainer container kubepods-burstable-pod9c40a5d663b166d3b9d529093a20243d.slice. 
Dec 13 01:29:48.509379 systemd[1]: Created slice kubepods-burstable-pod1c33ccc3ea5b328341d89a2011f9555b.slice - libcontainer container kubepods-burstable-pod1c33ccc3ea5b328341d89a2011f9555b.slice. Dec 13 01:29:48.517019 kubelet[2770]: I1213 01:29:48.516980 2770 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:48.517343 kubelet[2770]: E1213 01:29:48.517318 2770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.20:6443/api/v1/nodes\": dial tcp 172.31.31.20:6443: connect: connection refused" node="ip-172-31-31-20" Dec 13 01:29:48.533492 kubelet[2770]: E1213 01:29:48.533435 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-20?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="400ms" Dec 13 01:29:48.534670 kubelet[2770]: I1213 01:29:48.534639 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-ca-certs\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:48.534987 kubelet[2770]: I1213 01:29:48.534678 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:48.534987 kubelet[2770]: I1213 01:29:48.534704 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:48.534987 kubelet[2770]: I1213 01:29:48.534739 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:48.534987 kubelet[2770]: I1213 01:29:48.534774 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c33ccc3ea5b328341d89a2011f9555b-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-20\" (UID: \"1c33ccc3ea5b328341d89a2011f9555b\") " pod="kube-system/kube-scheduler-ip-172-31-31-20" Dec 13 01:29:48.534987 kubelet[2770]: I1213 01:29:48.534798 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:48.535130 kubelet[2770]: I1213 01:29:48.534826 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:48.535130 kubelet[2770]: I1213 01:29:48.534861 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:48.535130 kubelet[2770]: I1213 01:29:48.534885 2770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:48.719761 kubelet[2770]: I1213 01:29:48.719730 2770 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:48.720103 kubelet[2770]: E1213 01:29:48.720070 2770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.20:6443/api/v1/nodes\": dial tcp 172.31.31.20:6443: connect: connection refused" node="ip-172-31-31-20" Dec 13 01:29:48.797363 containerd[1956]: time="2024-12-13T01:29:48.797311549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-20,Uid:8f277926f7948ce131e3dc44161e8cbf,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:48.814647 containerd[1956]: time="2024-12-13T01:29:48.814595289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-20,Uid:1c33ccc3ea5b328341d89a2011f9555b,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:48.815252 containerd[1956]: time="2024-12-13T01:29:48.814595378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-20,Uid:9c40a5d663b166d3b9d529093a20243d,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:48.934440 kubelet[2770]: E1213 01:29:48.934385 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-20?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="800ms" Dec 13 01:29:49.122355 kubelet[2770]: I1213 01:29:49.122235 2770 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:49.122893 kubelet[2770]: E1213 01:29:49.122824 2770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.20:6443/api/v1/nodes\": dial tcp 172.31.31.20:6443: connect: connection refused" node="ip-172-31-31-20" Dec 13 01:29:49.338229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144582032.mount: Deactivated successfully. 
Dec 13 01:29:49.350335 containerd[1956]: time="2024-12-13T01:29:49.350272166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.352041 containerd[1956]: time="2024-12-13T01:29:49.351999428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.353046 containerd[1956]: time="2024-12-13T01:29:49.352981238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:29:49.354315 containerd[1956]: time="2024-12-13T01:29:49.354278222Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.356083 containerd[1956]: time="2024-12-13T01:29:49.356039446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:49.356885 containerd[1956]: time="2024-12-13T01:29:49.356195478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.356885 containerd[1956]: time="2024-12-13T01:29:49.356458091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:49.359545 containerd[1956]: time="2024-12-13T01:29:49.359509392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:49.362742 containerd[1956]: time="2024-12-13T01:29:49.362650294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.956222ms" Dec 13 01:29:49.364563 containerd[1956]: time="2024-12-13T01:29:49.364520921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.401144ms" Dec 13 01:29:49.368513 containerd[1956]: time="2024-12-13T01:29:49.368239270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.838452ms" Dec 13 01:29:49.373245 kubelet[2770]: W1213 01:29:49.373111 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-20&limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 
01:29:49.373245 kubelet[2770]: E1213 01:29:49.373192 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-20&limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:49.451769 kubelet[2770]: W1213 01:29:49.451472 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:49.451769 kubelet[2770]: E1213 01:29:49.451714 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:49.627877 containerd[1956]: time="2024-12-13T01:29:49.627660482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:49.628509 containerd[1956]: time="2024-12-13T01:29:49.628323041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:49.628509 containerd[1956]: time="2024-12-13T01:29:49.628350615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.628509 containerd[1956]: time="2024-12-13T01:29:49.628435431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.634000 containerd[1956]: time="2024-12-13T01:29:49.633889728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:49.634572 containerd[1956]: time="2024-12-13T01:29:49.634506445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:49.638493 containerd[1956]: time="2024-12-13T01:29:49.638059378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.638493 containerd[1956]: time="2024-12-13T01:29:49.638224659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.646622 containerd[1956]: time="2024-12-13T01:29:49.646523898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:49.646863 containerd[1956]: time="2024-12-13T01:29:49.646814266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:49.647070 containerd[1956]: time="2024-12-13T01:29:49.647036481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.647493 containerd[1956]: time="2024-12-13T01:29:49.647457838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:49.672244 kubelet[2770]: W1213 01:29:49.672196 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:49.674271 kubelet[2770]: E1213 01:29:49.673722 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:49.682116 systemd[1]: Started cri-containerd-406fd4a2a863d8ac23c72dfa6e8b6848d9f16af4728e17473adf6d836c1f9d5b.scope - libcontainer container 406fd4a2a863d8ac23c72dfa6e8b6848d9f16af4728e17473adf6d836c1f9d5b. Dec 13 01:29:49.697697 systemd[1]: Started cri-containerd-6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce.scope - libcontainer container 6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce. Dec 13 01:29:49.724757 systemd[1]: Started cri-containerd-44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd.scope - libcontainer container 44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd. Dec 13 01:29:49.737416 kubelet[2770]: E1213 01:29:49.737144 2770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-20?timeout=10s\": dial tcp 172.31.31.20:6443: connect: connection refused" interval="1.6s" Dec 13 01:29:49.814996 containerd[1956]: time="2024-12-13T01:29:49.812654403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-20,Uid:8f277926f7948ce131e3dc44161e8cbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"406fd4a2a863d8ac23c72dfa6e8b6848d9f16af4728e17473adf6d836c1f9d5b\"" Dec 13 01:29:49.832356 containerd[1956]: time="2024-12-13T01:29:49.831001544Z" level=info msg="CreateContainer within sandbox \"406fd4a2a863d8ac23c72dfa6e8b6848d9f16af4728e17473adf6d836c1f9d5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:29:49.841332 kubelet[2770]: W1213 01:29:49.841032 2770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.20:6443: connect: connection refused Dec 13 01:29:49.841332 kubelet[2770]: E1213 01:29:49.841119 2770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:49.854588 containerd[1956]: time="2024-12-13T01:29:49.854060026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-20,Uid:9c40a5d663b166d3b9d529093a20243d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce\"" Dec 13 01:29:49.859393 containerd[1956]: time="2024-12-13T01:29:49.859186184Z" level=info msg="CreateContainer within sandbox \"6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:29:49.872223 containerd[1956]: time="2024-12-13T01:29:49.871630348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-20,Uid:1c33ccc3ea5b328341d89a2011f9555b,Namespace:kube-system,Attempt:0,} returns sandbox id \"44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd\"" Dec 13 01:29:49.876702 containerd[1956]: time="2024-12-13T01:29:49.876425781Z" level=info msg="CreateContainer within sandbox \"44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:29:49.890283 containerd[1956]: time="2024-12-13T01:29:49.890186772Z" level=info msg="CreateContainer within sandbox \"6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c\"" Dec 13 01:29:49.892908 containerd[1956]: time="2024-12-13T01:29:49.891342968Z" level=info msg="StartContainer for \"536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c\"" Dec 13 01:29:49.902006 containerd[1956]: time="2024-12-13T01:29:49.901959091Z" level=info msg="CreateContainer within sandbox \"406fd4a2a863d8ac23c72dfa6e8b6848d9f16af4728e17473adf6d836c1f9d5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"612d8f3bc305f4a652617520f382066d2b1c1b915d3a247a128fa90e592c3d0a\"" Dec 13 01:29:49.903120 containerd[1956]: time="2024-12-13T01:29:49.903055387Z" level=info msg="StartContainer for \"612d8f3bc305f4a652617520f382066d2b1c1b915d3a247a128fa90e592c3d0a\"" Dec 13 01:29:49.904243 containerd[1956]: time="2024-12-13T01:29:49.903986332Z" level=info msg="CreateContainer within sandbox \"44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b\"" Dec 13 01:29:49.904880 containerd[1956]: time="2024-12-13T01:29:49.904464637Z" level=info msg="StartContainer for \"723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b\"" Dec 13 01:29:49.926595 kubelet[2770]: I1213 01:29:49.926568 2770 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:49.927472 kubelet[2770]: E1213 01:29:49.927440 2770 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.20:6443/api/v1/nodes\": dial tcp 172.31.31.20:6443: connect: connection refused" node="ip-172-31-31-20" Dec 13 01:29:49.960349 systemd[1]: Started cri-containerd-536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c.scope - libcontainer container 536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c. Dec 13 01:29:49.979257 systemd[1]: Started cri-containerd-612d8f3bc305f4a652617520f382066d2b1c1b915d3a247a128fa90e592c3d0a.scope - libcontainer container 612d8f3bc305f4a652617520f382066d2b1c1b915d3a247a128fa90e592c3d0a. 
Dec 13 01:29:49.992317 systemd[1]: Started cri-containerd-723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b.scope - libcontainer container 723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b. Dec 13 01:29:50.092607 kubelet[2770]: E1213 01:29:50.092307 2770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.20:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-20.1810985e874e199d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-20,UID:ip-172-31-31-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-20,},FirstTimestamp:2024-12-13 01:29:48.302031261 +0000 UTC m=+0.416774880,LastTimestamp:2024-12-13 01:29:48.302031261 +0000 UTC m=+0.416774880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-20,}" Dec 13 01:29:50.097549 containerd[1956]: time="2024-12-13T01:29:50.096574779Z" level=info msg="StartContainer for \"536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c\" returns successfully" Dec 13 01:29:50.114629 containerd[1956]: time="2024-12-13T01:29:50.114490713Z" level=info msg="StartContainer for \"612d8f3bc305f4a652617520f382066d2b1c1b915d3a247a128fa90e592c3d0a\" returns successfully" Dec 13 01:29:50.124608 containerd[1956]: time="2024-12-13T01:29:50.124433491Z" level=info msg="StartContainer for \"723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b\" returns successfully" Dec 13 01:29:50.272510 kubelet[2770]: E1213 01:29:50.272444 2770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.20:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:29:51.530927 kubelet[2770]: I1213 01:29:51.530285 2770 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:52.980818 kubelet[2770]: E1213 01:29:52.980775 2770 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-20\" not found" node="ip-172-31-31-20" Dec 13 01:29:53.092928 kubelet[2770]: I1213 01:29:53.092893 2770 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-20" Dec 13 01:29:53.296882 kubelet[2770]: I1213 01:29:53.296846 2770 apiserver.go:52] "Watching apiserver" Dec 13 01:29:53.324823 kubelet[2770]: I1213 01:29:53.324795 2770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:29:53.558673 kubelet[2770]: E1213 01:29:53.558545 2770 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-20\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:54.839549 systemd[1]: Reloading requested from client PID 3044 ('systemctl') (unit session-7.scope)... Dec 13 01:29:54.839568 systemd[1]: Reloading... Dec 13 01:29:54.957881 zram_generator::config[3084]: No configuration found. 
Dec 13 01:29:55.091169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:55.188205 systemd[1]: Reloading finished in 348 ms. Dec 13 01:29:55.231636 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:55.241285 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:55.241505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:55.248262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:55.486062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:55.498372 (kubelet)[3141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:55.666425 kubelet[3141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:55.666425 kubelet[3141]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:55.666425 kubelet[3141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:55.667116 kubelet[3141]: I1213 01:29:55.666500 3141 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:55.673502 kubelet[3141]: I1213 01:29:55.673142 3141 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:29:55.673502 kubelet[3141]: I1213 01:29:55.673172 3141 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:55.676866 kubelet[3141]: I1213 01:29:55.675621 3141 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:29:55.677334 kubelet[3141]: I1213 01:29:55.677303 3141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:29:55.681193 kubelet[3141]: I1213 01:29:55.681156 3141 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:55.687965 kubelet[3141]: E1213 01:29:55.687915 3141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:29:55.687965 kubelet[3141]: I1213 01:29:55.687949 3141 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:29:55.690003 kubelet[3141]: I1213 01:29:55.689956 3141 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:29:55.690157 kubelet[3141]: I1213 01:29:55.690095 3141 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:29:55.690297 kubelet[3141]: I1213 01:29:55.690257 3141 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:55.690488 kubelet[3141]: I1213 01:29:55.690295 3141 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:29:55.690612 kubelet[3141]: I1213 01:29:55.690488 3141 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:55.690612 kubelet[3141]: I1213 01:29:55.690504 3141 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:29:55.690612 kubelet[3141]: I1213 01:29:55.690549 3141 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:55.691046 kubelet[3141]: I1213 01:29:55.690682 3141 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:29:55.691046 kubelet[3141]: I1213 01:29:55.690699 3141 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:55.691046 kubelet[3141]: I1213 01:29:55.690734 3141 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:29:55.691046 kubelet[3141]: I1213 01:29:55.690751 3141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:55.695245 kubelet[3141]: I1213 01:29:55.692563 3141 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:55.695245 kubelet[3141]: I1213 01:29:55.693292 3141 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:55.695245 kubelet[3141]: I1213 01:29:55.693772 3141 server.go:1269] "Started kubelet" Dec 13 01:29:55.700861 kubelet[3141]: I1213 01:29:55.698961 3141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 
13 01:29:55.700861 kubelet[3141]: I1213 01:29:55.699316 3141 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:55.700861 kubelet[3141]: I1213 01:29:55.699369 3141 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:55.702864 kubelet[3141]: I1213 01:29:55.701212 3141 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:29:55.703118 kubelet[3141]: I1213 01:29:55.703105 3141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:55.722174 kubelet[3141]: I1213 01:29:55.721755 3141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:29:55.726141 kubelet[3141]: E1213 01:29:55.724179 3141 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:55.726141 kubelet[3141]: I1213 01:29:55.725263 3141 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:29:55.726141 kubelet[3141]: I1213 01:29:55.725396 3141 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:29:55.726141 kubelet[3141]: I1213 01:29:55.725541 3141 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:55.729515 kubelet[3141]: I1213 01:29:55.729006 3141 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:55.730596 kubelet[3141]: I1213 01:29:55.730567 3141 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:55.731138 kubelet[3141]: I1213 01:29:55.730627 3141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:55.734780 kubelet[3141]: I1213 01:29:55.734753 3141 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:29:55.734973 kubelet[3141]: I1213 01:29:55.734961 3141 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:55.735064 kubelet[3141]: I1213 01:29:55.735055 3141 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:29:55.735196 kubelet[3141]: E1213 01:29:55.735176 3141 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:55.739403 kubelet[3141]: I1213 01:29:55.739302 3141 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:55.795425 kubelet[3141]: I1213 01:29:55.795391 3141 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:55.795425 kubelet[3141]: I1213 01:29:55.795418 3141 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:55.795619 kubelet[3141]: I1213 01:29:55.795469 3141 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:55.795677 kubelet[3141]: I1213 01:29:55.795657 3141 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:29:55.795728 kubelet[3141]: I1213 01:29:55.795678 3141 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:29:55.795728 kubelet[3141]: I1213 01:29:55.795705 3141 policy_none.go:49] "None policy: Start" Dec 13 01:29:55.796501 kubelet[3141]: I1213 01:29:55.796479 3141 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:55.796574 kubelet[3141]: I1213 01:29:55.796504 3141 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:55.796722 kubelet[3141]: I1213 01:29:55.796702 3141 state_mem.go:75] "Updated machine memory state" Dec 13 01:29:55.801381 kubelet[3141]: I1213 01:29:55.801340 3141 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:55.801552 kubelet[3141]: I1213 01:29:55.801533 3141 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:29:55.801615 kubelet[3141]: I1213 01:29:55.801546 3141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:55.803419 kubelet[3141]: I1213 01:29:55.802496 3141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:55.849912 kubelet[3141]: E1213 01:29:55.849814 3141 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-20\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-20" Dec 13 01:29:55.911438 kubelet[3141]: I1213 01:29:55.910478 3141 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-20" Dec 13 01:29:55.926140 kubelet[3141]: I1213 01:29:55.926069 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:55.928113 kubelet[3141]: I1213 01:29:55.927753 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:55.928113 
kubelet[3141]: I1213 01:29:55.927802 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:55.928113 kubelet[3141]: I1213 01:29:55.927830 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:55.928113 kubelet[3141]: I1213 01:29:55.927885 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c33ccc3ea5b328341d89a2011f9555b-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-20\" (UID: \"1c33ccc3ea5b328341d89a2011f9555b\") " pod="kube-system/kube-scheduler-ip-172-31-31-20" Dec 13 01:29:55.928113 kubelet[3141]: I1213 01:29:55.927902 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:55.928365 kubelet[3141]: I1213 01:29:55.927923 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:55.928365 kubelet[3141]: I1213 01:29:55.928036 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c40a5d663b166d3b9d529093a20243d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-20\" (UID: \"9c40a5d663b166d3b9d529093a20243d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-20" Dec 13 01:29:55.928365 kubelet[3141]: I1213 01:29:55.928060 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f277926f7948ce131e3dc44161e8cbf-ca-certs\") pod \"kube-apiserver-ip-172-31-31-20\" (UID: \"8f277926f7948ce131e3dc44161e8cbf\") " pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:55.930158 kubelet[3141]: I1213 01:29:55.929366 3141 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-20" Dec 13 01:29:55.930158 kubelet[3141]: I1213 01:29:55.929885 3141 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-20" Dec 13 01:29:56.698542 kubelet[3141]: I1213 01:29:56.698497 3141 apiserver.go:52] "Watching apiserver" Dec 13 01:29:56.726888 kubelet[3141]: I1213 01:29:56.726027 3141 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:29:56.755875 update_engine[1939]: I20241213 01:29:56.754547 1939 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:29:56.801739 kubelet[3141]: E1213 01:29:56.801704 3141 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-20\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-20" Dec 13 01:29:56.888716 kubelet[3141]: I1213 01:29:56.886548 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-20" podStartSLOduration=2.886528445 podStartE2EDuration="2.886528445s" podCreationTimestamp="2024-12-13 01:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:56.862494242 +0000 UTC m=+1.355152288" watchObservedRunningTime="2024-12-13 01:29:56.886528445 +0000 UTC m=+1.379186488" Dec 13 01:29:56.906755 kubelet[3141]: I1213 01:29:56.905233 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-20" podStartSLOduration=1.9052111790000001 podStartE2EDuration="1.905211179s" podCreationTimestamp="2024-12-13 01:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:56.887031153 +0000 UTC m=+1.379689197" watchObservedRunningTime="2024-12-13 01:29:56.905211179 +0000 UTC m=+1.397869219" Dec 13 01:29:56.908872 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3190) Dec 13 01:29:56.939772 kubelet[3141]: I1213 01:29:56.939618 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-20" podStartSLOduration=1.939492671 podStartE2EDuration="1.939492671s" podCreationTimestamp="2024-12-13 01:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:56.905696692 +0000 UTC m=+1.398354739" watchObservedRunningTime="2024-12-13 01:29:56.939492671 +0000 UTC m=+1.432150716" Dec 13 01:29:57.294883 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3192) Dec 13 01:30:00.571872 kubelet[3141]: I1213 01:30:00.571019 3141 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:30:00.572431 containerd[1956]: time="2024-12-13T01:30:00.571754215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:30:00.575364 kubelet[3141]: I1213 01:30:00.572164 3141 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:30:01.582398 kubelet[3141]: I1213 01:30:01.582028 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa4e28f4-1f31-4f0f-bb20-3391ed1021d0-kube-proxy\") pod \"kube-proxy-cqb6l\" (UID: \"aa4e28f4-1f31-4f0f-bb20-3391ed1021d0\") " pod="kube-system/kube-proxy-cqb6l" Dec 13 01:30:01.582398 kubelet[3141]: I1213 01:30:01.582073 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa4e28f4-1f31-4f0f-bb20-3391ed1021d0-xtables-lock\") pod \"kube-proxy-cqb6l\" (UID: \"aa4e28f4-1f31-4f0f-bb20-3391ed1021d0\") " pod="kube-system/kube-proxy-cqb6l" Dec 13 01:30:01.582398 kubelet[3141]: I1213 01:30:01.582098 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa4e28f4-1f31-4f0f-bb20-3391ed1021d0-lib-modules\") pod \"kube-proxy-cqb6l\" (UID: \"aa4e28f4-1f31-4f0f-bb20-3391ed1021d0\") " pod="kube-system/kube-proxy-cqb6l" Dec 13 01:30:01.582398 kubelet[3141]: I1213 01:30:01.582122 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84pn\" (UniqueName: \"kubernetes.io/projected/aa4e28f4-1f31-4f0f-bb20-3391ed1021d0-kube-api-access-x84pn\") pod \"kube-proxy-cqb6l\" (UID: \"aa4e28f4-1f31-4f0f-bb20-3391ed1021d0\") " pod="kube-system/kube-proxy-cqb6l" Dec 13 01:30:01.594770 systemd[1]: Created slice kubepods-besteffort-podaa4e28f4_1f31_4f0f_bb20_3391ed1021d0.slice - libcontainer container kubepods-besteffort-podaa4e28f4_1f31_4f0f_bb20_3391ed1021d0.slice. Dec 13 01:30:01.918509 containerd[1956]: time="2024-12-13T01:30:01.918371248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqb6l,Uid:aa4e28f4-1f31-4f0f-bb20-3391ed1021d0,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:01.949183 systemd[1]: Created slice kubepods-besteffort-podc564fdda_3937_492a_a435_b3a3aeb472bb.slice - libcontainer container kubepods-besteffort-podc564fdda_3937_492a_a435_b3a3aeb472bb.slice. Dec 13 01:30:02.002146 kubelet[3141]: I1213 01:30:02.000880 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c564fdda-3937-492a-a435-b3a3aeb472bb-var-lib-calico\") pod \"tigera-operator-76c4976dd7-dk7bp\" (UID: \"c564fdda-3937-492a-a435-b3a3aeb472bb\") " pod="tigera-operator/tigera-operator-76c4976dd7-dk7bp" Dec 13 01:30:02.002146 kubelet[3141]: I1213 01:30:02.000954 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qv9l\" (UniqueName: \"kubernetes.io/projected/c564fdda-3937-492a-a435-b3a3aeb472bb-kube-api-access-9qv9l\") pod \"tigera-operator-76c4976dd7-dk7bp\" (UID: \"c564fdda-3937-492a-a435-b3a3aeb472bb\") " pod="tigera-operator/tigera-operator-76c4976dd7-dk7bp" Dec 13 01:30:02.070894 containerd[1956]: time="2024-12-13T01:30:02.069064679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:02.070894 containerd[1956]: time="2024-12-13T01:30:02.069155668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:02.070894 containerd[1956]: time="2024-12-13T01:30:02.069176709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.070894 containerd[1956]: time="2024-12-13T01:30:02.069310211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.277105 containerd[1956]: time="2024-12-13T01:30:02.276968581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-dk7bp,Uid:c564fdda-3937-492a-a435-b3a3aeb472bb,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:30:02.309091 systemd[1]: Started cri-containerd-d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6.scope - libcontainer container d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6. Dec 13 01:30:02.462822 containerd[1956]: time="2024-12-13T01:30:02.456118158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:02.462822 containerd[1956]: time="2024-12-13T01:30:02.457177318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:02.462822 containerd[1956]: time="2024-12-13T01:30:02.457273594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.464214 containerd[1956]: time="2024-12-13T01:30:02.464007279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.506790 containerd[1956]: time="2024-12-13T01:30:02.505804774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqb6l,Uid:aa4e28f4-1f31-4f0f-bb20-3391ed1021d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6\"" Dec 13 01:30:02.522487 containerd[1956]: time="2024-12-13T01:30:02.522082379Z" level=info msg="CreateContainer within sandbox \"d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:30:02.561126 systemd[1]: Started cri-containerd-62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8.scope - libcontainer container 62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8. Dec 13 01:30:02.617718 containerd[1956]: time="2024-12-13T01:30:02.616937258Z" level=info msg="CreateContainer within sandbox \"d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc67f95fe58c78d933c657e00bb7a6f0189ad41da99bc2088822f58204f70bcf\"" Dec 13 01:30:02.619882 containerd[1956]: time="2024-12-13T01:30:02.619796466Z" level=info msg="StartContainer for \"bc67f95fe58c78d933c657e00bb7a6f0189ad41da99bc2088822f58204f70bcf\"" Dec 13 01:30:02.790710 systemd[1]: Started cri-containerd-bc67f95fe58c78d933c657e00bb7a6f0189ad41da99bc2088822f58204f70bcf.scope - libcontainer container bc67f95fe58c78d933c657e00bb7a6f0189ad41da99bc2088822f58204f70bcf. 
Dec 13 01:30:02.923641 containerd[1956]: time="2024-12-13T01:30:02.912351908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-dk7bp,Uid:c564fdda-3937-492a-a435-b3a3aeb472bb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8\"" Dec 13 01:30:02.927215 containerd[1956]: time="2024-12-13T01:30:02.926742059Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:30:02.942867 systemd[1]: run-containerd-runc-k8s.io-d0a81a96d8a893cde6e5bec0c5f342a52781c6f2688d45f3905ea54a186f53f6-runc.NZZk0l.mount: Deactivated successfully. Dec 13 01:30:03.119620 containerd[1956]: time="2024-12-13T01:30:03.119339433Z" level=info msg="StartContainer for \"bc67f95fe58c78d933c657e00bb7a6f0189ad41da99bc2088822f58204f70bcf\" returns successfully" Dec 13 01:30:03.587303 sudo[2276]: pam_unix(sudo:session): session closed for user root Dec 13 01:30:03.618723 sshd[2273]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:03.662375 systemd[1]: sshd@6-172.31.31.20:22-139.178.68.195:42154.service: Deactivated successfully. Dec 13 01:30:03.671106 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:30:03.671476 systemd[1]: session-7.scope: Consumed 5.467s CPU time, 141.4M memory peak, 0B memory swap peak. Dec 13 01:30:03.677723 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:30:03.679651 systemd-logind[1938]: Removed session 7. Dec 13 01:30:07.380920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477327799.mount: Deactivated successfully. Dec 13 01:30:07.920721 kubelet[3141]: I1213 01:30:07.920541 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqb6l" podStartSLOduration=6.920519126 podStartE2EDuration="6.920519126s" podCreationTimestamp="2024-12-13 01:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:03.88393109 +0000 UTC m=+8.376589134" watchObservedRunningTime="2024-12-13 01:30:07.920519126 +0000 UTC m=+12.413177169" Dec 13 01:30:12.253424 containerd[1956]: time="2024-12-13T01:30:12.253367986Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:12.255128 containerd[1956]: time="2024-12-13T01:30:12.254792869Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764313" Dec 13 01:30:12.257652 containerd[1956]: time="2024-12-13T01:30:12.256121203Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:12.258956 containerd[1956]: time="2024-12-13T01:30:12.258826562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:12.259809 containerd[1956]: time="2024-12-13T01:30:12.259649178Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 9.332854732s" Dec 13 
01:30:12.259809 containerd[1956]: time="2024-12-13T01:30:12.259694738Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:30:12.263310 containerd[1956]: time="2024-12-13T01:30:12.263271199Z" level=info msg="CreateContainer within sandbox \"62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:30:12.290947 containerd[1956]: time="2024-12-13T01:30:12.290896905Z" level=info msg="CreateContainer within sandbox \"62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac\"" Dec 13 01:30:12.292595 containerd[1956]: time="2024-12-13T01:30:12.291675426Z" level=info msg="StartContainer for \"60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac\"" Dec 13 01:30:12.330397 systemd[1]: run-containerd-runc-k8s.io-60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac-runc.YlgVXS.mount: Deactivated successfully. Dec 13 01:30:12.341319 systemd[1]: Started cri-containerd-60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac.scope - libcontainer container 60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac. Dec 13 01:30:12.377991 containerd[1956]: time="2024-12-13T01:30:12.377944073Z" level=info msg="StartContainer for \"60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac\" returns successfully" Dec 13 01:30:16.077028 kubelet[3141]: I1213 01:30:16.076305 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-dk7bp" podStartSLOduration=5.740751617 podStartE2EDuration="15.076275999s" podCreationTimestamp="2024-12-13 01:30:01 +0000 UTC" firstStartedPulling="2024-12-13 01:30:02.92527581 +0000 UTC m=+7.417933839" lastFinishedPulling="2024-12-13 01:30:12.260800186 +0000 UTC m=+16.753458221" observedRunningTime="2024-12-13 01:30:12.930540822 +0000 UTC m=+17.423198866" watchObservedRunningTime="2024-12-13 01:30:16.076275999 +0000 UTC m=+20.568934043" Dec 13 01:30:16.108041 systemd[1]: Created slice kubepods-besteffort-pod1613735b_71a2_470b_a102_9662a843f91a.slice - libcontainer container kubepods-besteffort-pod1613735b_71a2_470b_a102_9662a843f91a.slice. 
Dec 13 01:30:16.248471 kubelet[3141]: I1213 01:30:16.248417 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7k9n\" (UniqueName: \"kubernetes.io/projected/1613735b-71a2-470b-a102-9662a843f91a-kube-api-access-l7k9n\") pod \"calico-typha-66d5d9b775-rgc2v\" (UID: \"1613735b-71a2-470b-a102-9662a843f91a\") " pod="calico-system/calico-typha-66d5d9b775-rgc2v" Dec 13 01:30:16.248968 kubelet[3141]: I1213 01:30:16.248481 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1613735b-71a2-470b-a102-9662a843f91a-tigera-ca-bundle\") pod \"calico-typha-66d5d9b775-rgc2v\" (UID: \"1613735b-71a2-470b-a102-9662a843f91a\") " pod="calico-system/calico-typha-66d5d9b775-rgc2v" Dec 13 01:30:16.248968 kubelet[3141]: I1213 01:30:16.248504 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1613735b-71a2-470b-a102-9662a843f91a-typha-certs\") pod \"calico-typha-66d5d9b775-rgc2v\" (UID: \"1613735b-71a2-470b-a102-9662a843f91a\") " pod="calico-system/calico-typha-66d5d9b775-rgc2v" Dec 13 01:30:16.309514 systemd[1]: Created slice kubepods-besteffort-podf3d225be_a8d6_43d8_8820_af452c73c95b.slice - libcontainer container kubepods-besteffort-podf3d225be_a8d6_43d8_8820_af452c73c95b.slice. Dec 13 01:30:16.449530 containerd[1956]: time="2024-12-13T01:30:16.449405551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d5d9b775-rgc2v,Uid:1613735b-71a2-470b-a102-9662a843f91a,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:16.451776 kubelet[3141]: I1213 01:30:16.451624 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-policysync\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.451776 kubelet[3141]: I1213 01:30:16.451675 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-cni-net-dir\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.451776 kubelet[3141]: I1213 01:30:16.451713 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-cni-bin-dir\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.451776 kubelet[3141]: I1213 01:30:16.451740 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d225be-a8d6-43d8-8820-af452c73c95b-tigera-ca-bundle\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.451776 kubelet[3141]: I1213 01:30:16.451765 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-var-run-calico\") pod \"calico-node-75pkx\" (UID: 
\"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452252 kubelet[3141]: I1213 01:30:16.451787 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-lib-modules\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452252 kubelet[3141]: I1213 01:30:16.451810 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-flexvol-driver-host\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452252 kubelet[3141]: I1213 01:30:16.451834 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-var-lib-calico\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452252 kubelet[3141]: I1213 01:30:16.451871 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vdt\" (UniqueName: \"kubernetes.io/projected/f3d225be-a8d6-43d8-8820-af452c73c95b-kube-api-access-t9vdt\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452252 kubelet[3141]: I1213 01:30:16.451905 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-xtables-lock\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452511 kubelet[3141]: I1213 01:30:16.451930 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3d225be-a8d6-43d8-8820-af452c73c95b-node-certs\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.452511 kubelet[3141]: I1213 01:30:16.451953 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3d225be-a8d6-43d8-8820-af452c73c95b-cni-log-dir\") pod \"calico-node-75pkx\" (UID: \"f3d225be-a8d6-43d8-8820-af452c73c95b\") " pod="calico-system/calico-node-75pkx" Dec 13 01:30:16.534208 kubelet[3141]: E1213 01:30:16.534142 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:16.537390 containerd[1956]: time="2024-12-13T01:30:16.537252562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:16.537390 containerd[1956]: time="2024-12-13T01:30:16.537337709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:16.537926 containerd[1956]: time="2024-12-13T01:30:16.537374419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:16.537926 containerd[1956]: time="2024-12-13T01:30:16.537523735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:16.590909 kubelet[3141]: E1213 01:30:16.590642 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.590909 kubelet[3141]: W1213 01:30:16.590668 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.590909 kubelet[3141]: E1213 01:30:16.590696 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.602102 systemd[1]: Started cri-containerd-bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d.scope - libcontainer container bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d. Dec 13 01:30:16.606569 kubelet[3141]: E1213 01:30:16.606177 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.606569 kubelet[3141]: W1213 01:30:16.606209 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.606569 kubelet[3141]: E1213 01:30:16.606318 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.622470 kubelet[3141]: E1213 01:30:16.622438 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.622470 kubelet[3141]: W1213 01:30:16.622463 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.622678 kubelet[3141]: E1213 01:30:16.622486 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.653592 kubelet[3141]: E1213 01:30:16.653540 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.653592 kubelet[3141]: W1213 01:30:16.653569 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.653592 kubelet[3141]: E1213 01:30:16.653595 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.654967 kubelet[3141]: I1213 01:30:16.653638 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/38d9e318-7884-46ef-aa8d-69d6c11c0096-socket-dir\") pod \"csi-node-driver-tjm8g\" (UID: \"38d9e318-7884-46ef-aa8d-69d6c11c0096\") " pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:16.654967 kubelet[3141]: E1213 01:30:16.654391 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.654967 kubelet[3141]: W1213 01:30:16.654410 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.654967 kubelet[3141]: E1213 01:30:16.654439 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.654967 kubelet[3141]: I1213 01:30:16.654481 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38d9e318-7884-46ef-aa8d-69d6c11c0096-kubelet-dir\") pod \"csi-node-driver-tjm8g\" (UID: \"38d9e318-7884-46ef-aa8d-69d6c11c0096\") " pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:16.655797 kubelet[3141]: E1213 01:30:16.655590 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.655797 kubelet[3141]: W1213 01:30:16.655607 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.655797 kubelet[3141]: E1213 01:30:16.655738 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.657386 kubelet[3141]: E1213 01:30:16.656530 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.657386 kubelet[3141]: W1213 01:30:16.656546 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.657386 kubelet[3141]: E1213 01:30:16.656575 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.657929 kubelet[3141]: E1213 01:30:16.657751 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.657929 kubelet[3141]: W1213 01:30:16.657766 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.657929 kubelet[3141]: E1213 01:30:16.657805 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.657929 kubelet[3141]: I1213 01:30:16.657919 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/38d9e318-7884-46ef-aa8d-69d6c11c0096-registration-dir\") pod \"csi-node-driver-tjm8g\" (UID: \"38d9e318-7884-46ef-aa8d-69d6c11c0096\") " pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:16.658654 kubelet[3141]: E1213 01:30:16.658385 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.658654 kubelet[3141]: W1213 01:30:16.658399 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.658654 kubelet[3141]: E1213 01:30:16.658417 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.658983 kubelet[3141]: E1213 01:30:16.658893 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.658983 kubelet[3141]: W1213 01:30:16.658906 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.658983 kubelet[3141]: E1213 01:30:16.658924 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.660479 kubelet[3141]: E1213 01:30:16.659504 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.660479 kubelet[3141]: W1213 01:30:16.659518 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.660479 kubelet[3141]: E1213 01:30:16.659551 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.660479 kubelet[3141]: I1213 01:30:16.659581 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxqb\" (UniqueName: \"kubernetes.io/projected/38d9e318-7884-46ef-aa8d-69d6c11c0096-kube-api-access-tnxqb\") pod \"csi-node-driver-tjm8g\" (UID: \"38d9e318-7884-46ef-aa8d-69d6c11c0096\") " pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:16.661145 kubelet[3141]: E1213 01:30:16.660934 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.661145 kubelet[3141]: W1213 01:30:16.660950 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.661145 kubelet[3141]: E1213 01:30:16.661047 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.661655 kubelet[3141]: E1213 01:30:16.661537 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.661655 kubelet[3141]: W1213 01:30:16.661551 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.661655 kubelet[3141]: E1213 01:30:16.661578 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.662388 kubelet[3141]: E1213 01:30:16.662093 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.662388 kubelet[3141]: W1213 01:30:16.662107 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.662388 kubelet[3141]: E1213 01:30:16.662135 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.662388 kubelet[3141]: I1213 01:30:16.662161 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/38d9e318-7884-46ef-aa8d-69d6c11c0096-varrun\") pod \"csi-node-driver-tjm8g\" (UID: \"38d9e318-7884-46ef-aa8d-69d6c11c0096\") " pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:16.663050 kubelet[3141]: E1213 01:30:16.662868 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.663050 kubelet[3141]: W1213 01:30:16.662884 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.663050 kubelet[3141]: E1213 01:30:16.662902 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.663891 kubelet[3141]: E1213 01:30:16.663584 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.663891 kubelet[3141]: W1213 01:30:16.663599 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.663891 kubelet[3141]: E1213 01:30:16.663655 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.665104 kubelet[3141]: E1213 01:30:16.664953 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.665104 kubelet[3141]: W1213 01:30:16.664974 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.665104 kubelet[3141]: E1213 01:30:16.664990 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.666294 kubelet[3141]: E1213 01:30:16.666237 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.666294 kubelet[3141]: W1213 01:30:16.666251 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.666294 kubelet[3141]: E1213 01:30:16.666264 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.764183 kubelet[3141]: E1213 01:30:16.764104 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.764183 kubelet[3141]: W1213 01:30:16.764130 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.764183 kubelet[3141]: E1213 01:30:16.764156 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.766314 kubelet[3141]: E1213 01:30:16.766285 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.766314 kubelet[3141]: W1213 01:30:16.766311 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.766662 kubelet[3141]: E1213 01:30:16.766349 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.767278 kubelet[3141]: E1213 01:30:16.766815 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.767278 kubelet[3141]: W1213 01:30:16.766831 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.767278 kubelet[3141]: E1213 01:30:16.766878 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.767278 kubelet[3141]: E1213 01:30:16.767122 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.767278 kubelet[3141]: W1213 01:30:16.767132 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.767278 kubelet[3141]: E1213 01:30:16.767148 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.767920 kubelet[3141]: E1213 01:30:16.767741 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.767920 kubelet[3141]: W1213 01:30:16.767756 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.767920 kubelet[3141]: E1213 01:30:16.767790 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.768095 kubelet[3141]: E1213 01:30:16.768079 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.768095 kubelet[3141]: W1213 01:30:16.768092 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.768282 kubelet[3141]: E1213 01:30:16.768107 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.768432 kubelet[3141]: E1213 01:30:16.768344 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.768432 kubelet[3141]: W1213 01:30:16.768357 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.768536 kubelet[3141]: E1213 01:30:16.768461 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.769684 kubelet[3141]: E1213 01:30:16.769648 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.769684 kubelet[3141]: W1213 01:30:16.769661 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.769997 kubelet[3141]: E1213 01:30:16.769857 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.769997 kubelet[3141]: E1213 01:30:16.769964 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.769997 kubelet[3141]: W1213 01:30:16.769975 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.770171 kubelet[3141]: E1213 01:30:16.770071 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.770604 kubelet[3141]: E1213 01:30:16.770269 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.770604 kubelet[3141]: W1213 01:30:16.770281 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.770604 kubelet[3141]: E1213 01:30:16.770426 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.772860 kubelet[3141]: E1213 01:30:16.771862 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.772860 kubelet[3141]: W1213 01:30:16.771934 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.772860 kubelet[3141]: E1213 01:30:16.772045 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.773049 kubelet[3141]: E1213 01:30:16.773004 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.773049 kubelet[3141]: W1213 01:30:16.773026 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.773145 kubelet[3141]: E1213 01:30:16.773112 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.773512 kubelet[3141]: E1213 01:30:16.773492 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.773512 kubelet[3141]: W1213 01:30:16.773509 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.773825 kubelet[3141]: E1213 01:30:16.773712 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.774714 kubelet[3141]: E1213 01:30:16.774079 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.774714 kubelet[3141]: W1213 01:30:16.774184 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.774714 kubelet[3141]: E1213 01:30:16.774371 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.774901 kubelet[3141]: E1213 01:30:16.774753 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.774901 kubelet[3141]: W1213 01:30:16.774764 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.775001 kubelet[3141]: E1213 01:30:16.774958 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.775241 kubelet[3141]: E1213 01:30:16.775227 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.775428 kubelet[3141]: W1213 01:30:16.775305 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.775428 kubelet[3141]: E1213 01:30:16.775407 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.775863 kubelet[3141]: E1213 01:30:16.775670 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.775863 kubelet[3141]: W1213 01:30:16.775682 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.775863 kubelet[3141]: E1213 01:30:16.775723 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.776942 kubelet[3141]: E1213 01:30:16.776465 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.776942 kubelet[3141]: W1213 01:30:16.776477 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.776942 kubelet[3141]: E1213 01:30:16.776516 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.777079 containerd[1956]: time="2024-12-13T01:30:16.776623887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66d5d9b775-rgc2v,Uid:1613735b-71a2-470b-a102-9662a843f91a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d\"" Dec 13 01:30:16.777518 kubelet[3141]: E1213 01:30:16.777268 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.777518 kubelet[3141]: W1213 01:30:16.777279 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.777619 kubelet[3141]: E1213 01:30:16.777568 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.781390 kubelet[3141]: E1213 01:30:16.779989 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.781390 kubelet[3141]: W1213 01:30:16.780043 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.781975 kubelet[3141]: E1213 01:30:16.781812 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.781975 kubelet[3141]: W1213 01:30:16.781829 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.782199 kubelet[3141]: E1213 01:30:16.782171 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.782404 kubelet[3141]: W1213 01:30:16.782383 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.782779 kubelet[3141]: E1213 01:30:16.782762 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.782779 kubelet[3141]: W1213 01:30:16.782779 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.782921 kubelet[3141]: E1213 01:30:16.782795 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:16.786057 kubelet[3141]: E1213 01:30:16.786026 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.786283 kubelet[3141]: W1213 01:30:16.786261 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.786343 kubelet[3141]: E1213 01:30:16.786294 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.787864 kubelet[3141]: E1213 01:30:16.782356 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.789671 kubelet[3141]: E1213 01:30:16.788368 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.789671 kubelet[3141]: W1213 01:30:16.788393 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.789671 kubelet[3141]: E1213 01:30:16.788466 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.789671 kubelet[3141]: E1213 01:30:16.782373 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.789671 kubelet[3141]: E1213 01:30:16.788515 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.796324 containerd[1956]: time="2024-12-13T01:30:16.796282749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:30:16.813945 kubelet[3141]: E1213 01:30:16.812956 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:16.813945 kubelet[3141]: W1213 01:30:16.812982 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:16.813945 kubelet[3141]: E1213 01:30:16.813007 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:16.916199 containerd[1956]: time="2024-12-13T01:30:16.916150239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-75pkx,Uid:f3d225be-a8d6-43d8-8820-af452c73c95b,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:16.968283 containerd[1956]: time="2024-12-13T01:30:16.968126508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:16.968283 containerd[1956]: time="2024-12-13T01:30:16.968218183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:16.968283 containerd[1956]: time="2024-12-13T01:30:16.968241994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:16.968656 containerd[1956]: time="2024-12-13T01:30:16.968569305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:16.998179 systemd[1]: Started cri-containerd-35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4.scope - libcontainer container 35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4. Dec 13 01:30:17.074747 containerd[1956]: time="2024-12-13T01:30:17.074401799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-75pkx,Uid:f3d225be-a8d6-43d8-8820-af452c73c95b,Namespace:calico-system,Attempt:0,} returns sandbox id \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\"" Dec 13 01:30:17.371089 systemd[1]: run-containerd-runc-k8s.io-bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d-runc.TsnTS4.mount: Deactivated successfully. Dec 13 01:30:17.739147 kubelet[3141]: E1213 01:30:17.738060 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:18.267562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227578421.mount: Deactivated successfully. Dec 13 01:30:19.755348 kubelet[3141]: E1213 01:30:19.754792 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:20.015085 containerd[1956]: time="2024-12-13T01:30:20.014954105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.019625 containerd[1956]: time="2024-12-13T01:30:20.018041631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:30:20.022614 containerd[1956]: time="2024-12-13T01:30:20.022522075Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.030220 containerd[1956]: time="2024-12-13T01:30:20.029045832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:20.032158 containerd[1956]: time="2024-12-13T01:30:20.032114254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.235664473s" Dec 13 01:30:20.032701 containerd[1956]: time="2024-12-13T01:30:20.032665903Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:30:20.035289 containerd[1956]: time="2024-12-13T01:30:20.035226896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:30:20.063270 containerd[1956]: time="2024-12-13T01:30:20.063222008Z" level=info msg="CreateContainer within sandbox \"bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:30:20.121876 containerd[1956]: time="2024-12-13T01:30:20.121772511Z" level=info msg="CreateContainer within sandbox \"bd398f87876241bc50eb8e7ab0833bef28fc83a3493df15a673b079ca9fa682d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3104e1be1b63744167255313992b32fe85e783bb917828104d2043c5544f92ac\"" Dec 13 01:30:20.123873 containerd[1956]: time="2024-12-13T01:30:20.122941762Z" level=info msg="StartContainer for \"3104e1be1b63744167255313992b32fe85e783bb917828104d2043c5544f92ac\"" Dec 13 01:30:20.201114 systemd[1]: Started cri-containerd-3104e1be1b63744167255313992b32fe85e783bb917828104d2043c5544f92ac.scope - libcontainer container 3104e1be1b63744167255313992b32fe85e783bb917828104d2043c5544f92ac. Dec 13 01:30:20.310084 containerd[1956]: time="2024-12-13T01:30:20.309957785Z" level=info msg="StartContainer for \"3104e1be1b63744167255313992b32fe85e783bb917828104d2043c5544f92ac\" returns successfully" Dec 13 01:30:21.022065 kubelet[3141]: I1213 01:30:21.021507 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66d5d9b775-rgc2v" podStartSLOduration=1.783356741 podStartE2EDuration="5.021480814s" podCreationTimestamp="2024-12-13 01:30:16 +0000 UTC" firstStartedPulling="2024-12-13 01:30:16.795937823 +0000 UTC m=+21.288595850" lastFinishedPulling="2024-12-13 01:30:20.034061885 +0000 UTC m=+24.526719923" observedRunningTime="2024-12-13 01:30:21.015520986 +0000 UTC m=+25.508179027" watchObservedRunningTime="2024-12-13 01:30:21.021480814 +0000 UTC m=+25.514138875" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044158 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.045908 kubelet[3141]: W1213 01:30:21.044192 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044239 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044497 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.045908 kubelet[3141]: W1213 01:30:21.044509 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044530 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044743 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.045908 kubelet[3141]: W1213 01:30:21.044753 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.044772 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.045908 kubelet[3141]: E1213 01:30:21.045068 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.046525 kubelet[3141]: W1213 01:30:21.045080 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.046525 kubelet[3141]: E1213 01:30:21.045100 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.046525 kubelet[3141]: E1213 01:30:21.045311 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.046525 kubelet[3141]: W1213 01:30:21.045320 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.046525 kubelet[3141]: E1213 01:30:21.045332 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.046525 kubelet[3141]: E1213 01:30:21.045565 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.046525 kubelet[3141]: W1213 01:30:21.045576 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.046525 kubelet[3141]: E1213 01:30:21.045587 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.047989 kubelet[3141]: E1213 01:30:21.046935 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.047989 kubelet[3141]: W1213 01:30:21.046963 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.047989 kubelet[3141]: E1213 01:30:21.046979 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.047989 kubelet[3141]: E1213 01:30:21.047710 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.047989 kubelet[3141]: W1213 01:30:21.047732 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.047989 kubelet[3141]: E1213 01:30:21.047748 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.048716 kubelet[3141]: E1213 01:30:21.048240 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.048716 kubelet[3141]: W1213 01:30:21.048251 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.048716 kubelet[3141]: E1213 01:30:21.048266 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.050986 kubelet[3141]: E1213 01:30:21.050960 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.050986 kubelet[3141]: W1213 01:30:21.050984 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.051137 kubelet[3141]: E1213 01:30:21.051003 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.053236 kubelet[3141]: E1213 01:30:21.053212 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.053236 kubelet[3141]: W1213 01:30:21.053236 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.053858 kubelet[3141]: E1213 01:30:21.053256 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.053858 kubelet[3141]: E1213 01:30:21.053524 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.053858 kubelet[3141]: W1213 01:30:21.053535 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.053858 kubelet[3141]: E1213 01:30:21.053549 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.055022 kubelet[3141]: E1213 01:30:21.053990 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.055022 kubelet[3141]: W1213 01:30:21.054001 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.055022 kubelet[3141]: E1213 01:30:21.054015 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.055872 kubelet[3141]: E1213 01:30:21.055830 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.055872 kubelet[3141]: W1213 01:30:21.055871 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.056009 kubelet[3141]: E1213 01:30:21.055889 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.056517 kubelet[3141]: E1213 01:30:21.056299 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.056517 kubelet[3141]: W1213 01:30:21.056313 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.056517 kubelet[3141]: E1213 01:30:21.056328 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.117871 kubelet[3141]: E1213 01:30:21.117354 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.117871 kubelet[3141]: W1213 01:30:21.117381 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.117871 kubelet[3141]: E1213 01:30:21.117404 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.118145 kubelet[3141]: E1213 01:30:21.117907 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.118145 kubelet[3141]: W1213 01:30:21.117919 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.118145 kubelet[3141]: E1213 01:30:21.118027 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.120205 kubelet[3141]: E1213 01:30:21.120176 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.120334 kubelet[3141]: W1213 01:30:21.120200 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.120531 kubelet[3141]: E1213 01:30:21.120514 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.120590 kubelet[3141]: W1213 01:30:21.120530 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.120590 kubelet[3141]: E1213 01:30:21.120545 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.120761 kubelet[3141]: E1213 01:30:21.120693 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.121864 kubelet[3141]: E1213 01:30:21.120959 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.121864 kubelet[3141]: W1213 01:30:21.120972 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.121864 kubelet[3141]: E1213 01:30:21.121010 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.121864 kubelet[3141]: E1213 01:30:21.121260 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.121864 kubelet[3141]: W1213 01:30:21.121269 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.121864 kubelet[3141]: E1213 01:30:21.121288 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.122191 kubelet[3141]: E1213 01:30:21.121888 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.122191 kubelet[3141]: W1213 01:30:21.121901 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.122191 kubelet[3141]: E1213 01:30:21.122023 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.122321 kubelet[3141]: E1213 01:30:21.122193 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.122321 kubelet[3141]: W1213 01:30:21.122202 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.122404 kubelet[3141]: E1213 01:30:21.122325 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.125088 kubelet[3141]: E1213 01:30:21.125068 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.125088 kubelet[3141]: W1213 01:30:21.125087 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.125256 kubelet[3141]: E1213 01:30:21.125109 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.125440 kubelet[3141]: E1213 01:30:21.125424 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.125498 kubelet[3141]: W1213 01:30:21.125440 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.125873 kubelet[3141]: E1213 01:30:21.125545 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.125940 kubelet[3141]: E1213 01:30:21.125917 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.125940 kubelet[3141]: W1213 01:30:21.125927 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.126260 kubelet[3141]: E1213 01:30:21.126245 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.126339 kubelet[3141]: W1213 01:30:21.126259 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.126339 kubelet[3141]: E1213 01:30:21.126281 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.126426 kubelet[3141]: E1213 01:30:21.126412 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.126594 kubelet[3141]: E1213 01:30:21.126580 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.126645 kubelet[3141]: W1213 01:30:21.126594 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.126645 kubelet[3141]: E1213 01:30:21.126611 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.128030 kubelet[3141]: E1213 01:30:21.128012 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.128030 kubelet[3141]: W1213 01:30:21.128029 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.128137 kubelet[3141]: E1213 01:30:21.128062 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.129120 kubelet[3141]: E1213 01:30:21.129103 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.129120 kubelet[3141]: W1213 01:30:21.129119 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.129243 kubelet[3141]: E1213 01:30:21.129138 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.129459 kubelet[3141]: E1213 01:30:21.129441 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.132229 kubelet[3141]: W1213 01:30:21.132200 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.132324 kubelet[3141]: E1213 01:30:21.132250 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.133441 kubelet[3141]: E1213 01:30:21.133240 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.133549 kubelet[3141]: W1213 01:30:21.133475 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.133549 kubelet[3141]: E1213 01:30:21.133504 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:30:21.133793 kubelet[3141]: E1213 01:30:21.133777 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:30:21.133873 kubelet[3141]: W1213 01:30:21.133793 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:30:21.133873 kubelet[3141]: E1213 01:30:21.133808 3141 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:30:21.538057 containerd[1956]: time="2024-12-13T01:30:21.538003675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.539402 containerd[1956]: time="2024-12-13T01:30:21.539200365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:30:21.540864 containerd[1956]: time="2024-12-13T01:30:21.540789583Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.544056 containerd[1956]: time="2024-12-13T01:30:21.543997935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:21.544872 containerd[1956]: time="2024-12-13T01:30:21.544748015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.509216304s" Dec 13 01:30:21.544872 containerd[1956]: time="2024-12-13T01:30:21.544791736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:30:21.548904 containerd[1956]: time="2024-12-13T01:30:21.547895381Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:30:21.577856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469175197.mount: Deactivated successfully. 
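The repeated driver-call failures above come from the kubelet's FlexVolume plugin probe: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses a JSON status object from the command's stdout. A missing binary therefore produces both "executable file not found in $PATH" and, from the empty output, "unexpected end of JSON input". The flexvol-driver container created just above is what normally installs that binary. The following is a minimal Go sketch of the init handshake, assuming only the standard FlexVolume calling convention; it is illustrative and is not Calico's actual nodeagent~uds driver.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet expects a FlexVolume
// driver to print on stdout for each command it is invoked with.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Printing nothing here is exactly what produces the kubelet's
		// "unexpected end of JSON input" unmarshal errors in the log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}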
Dec 13 01:30:21.590322 containerd[1956]: time="2024-12-13T01:30:21.590277237Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848\"" Dec 13 01:30:21.591126 containerd[1956]: time="2024-12-13T01:30:21.591090503Z" level=info msg="StartContainer for \"e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848\"" Dec 13 01:30:21.647249 systemd[1]: Started cri-containerd-e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848.scope - libcontainer container e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848. Dec 13 01:30:21.685377 containerd[1956]: time="2024-12-13T01:30:21.685319856Z" level=info msg="StartContainer for \"e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848\" returns successfully" Dec 13 01:30:21.703031 systemd[1]: cri-containerd-e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848.scope: Deactivated successfully. Dec 13 01:30:21.735554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848-rootfs.mount: Deactivated successfully. Dec 13 01:30:21.738768 kubelet[3141]: E1213 01:30:21.735869 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:21.963093 containerd[1956]: time="2024-12-13T01:30:21.921813536Z" level=info msg="shim disconnected" id=e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848 namespace=k8s.io Dec 13 01:30:21.963093 containerd[1956]: time="2024-12-13T01:30:21.962989755Z" level=warning msg="cleaning up after shim disconnected" id=e4b9f1927ff5a9575a1bb16a2426b1824b157284755fc11e1de596ba03f2b848 namespace=k8s.io Dec 13 01:30:21.963093 containerd[1956]: time="2024-12-13T01:30:21.963006007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:21.969026 kubelet[3141]: I1213 01:30:21.966523 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:22.976219 containerd[1956]: time="2024-12-13T01:30:22.975505688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:30:23.738945 kubelet[3141]: E1213 01:30:23.735999 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:25.745889 kubelet[3141]: E1213 01:30:25.736807 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:27.736042 kubelet[3141]: E1213 01:30:27.735977 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:28.577908 containerd[1956]: time="2024-12-13T01:30:28.577834602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:28.580880 containerd[1956]: time="2024-12-13T01:30:28.580625389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:30:28.583409 containerd[1956]: time="2024-12-13T01:30:28.583343679Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:28.588309 containerd[1956]: time="2024-12-13T01:30:28.586818561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:28.588309 containerd[1956]: time="2024-12-13T01:30:28.588158989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.61260616s" Dec 13 01:30:28.588309 containerd[1956]: time="2024-12-13T01:30:28.588204744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:30:28.596708 containerd[1956]: time="2024-12-13T01:30:28.596666834Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:30:28.641360 containerd[1956]: time="2024-12-13T01:30:28.641310066Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28\"" Dec 13 01:30:28.643554 containerd[1956]: time="2024-12-13T01:30:28.642208327Z" level=info msg="StartContainer for \"d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28\"" Dec 13 01:30:28.716049 systemd[1]: Started cri-containerd-d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28.scope - libcontainer container d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28. Dec 13 01:30:28.766884 containerd[1956]: time="2024-12-13T01:30:28.766820133Z" level=info msg="StartContainer for \"d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28\" returns successfully" Dec 13 01:30:29.609003 systemd[1]: cri-containerd-d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28.scope: Deactivated successfully. Dec 13 01:30:29.642827 kubelet[3141]: I1213 01:30:29.642489 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:29.740203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28-rootfs.mount: Deactivated successfully. 
Dec 13 01:30:29.756187 kubelet[3141]: E1213 01:30:29.756147 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:29.786679 kubelet[3141]: I1213 01:30:29.786651 3141 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:30:29.919528 systemd[1]: Created slice kubepods-burstable-pod3c9798d9_0918_4f04_b830_5e93da684068.slice - libcontainer container kubepods-burstable-pod3c9798d9_0918_4f04_b830_5e93da684068.slice. Dec 13 01:30:29.931255 containerd[1956]: time="2024-12-13T01:30:29.930126783Z" level=info msg="shim disconnected" id=d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28 namespace=k8s.io Dec 13 01:30:29.931255 containerd[1956]: time="2024-12-13T01:30:29.930278694Z" level=warning msg="cleaning up after shim disconnected" id=d61f9ce39b43829bebae7b949738aa014af5163db419360b8024143b08aa9c28 namespace=k8s.io Dec 13 01:30:29.931255 containerd[1956]: time="2024-12-13T01:30:29.930294830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:29.972943 systemd[1]: Created slice kubepods-burstable-pod1318da38_1af2_423d_bab0_f7184d00175d.slice - libcontainer container kubepods-burstable-pod1318da38_1af2_423d_bab0_f7184d00175d.slice. Dec 13 01:30:29.993365 kubelet[3141]: W1213 01:30:29.993331 3141 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-20" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-31-20' and this object Dec 13 01:30:29.993913 kubelet[3141]: E1213 01:30:29.993721 3141 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-31-20\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-31-20' and this object" logger="UnhandledError" Dec 13 01:30:30.000622 systemd[1]: Created slice kubepods-besteffort-pod6789e551_cd4a_4631_b879_423157868f76.slice - libcontainer container kubepods-besteffort-pod6789e551_cd4a_4631_b879_423157868f76.slice. 
Dec 13 01:30:30.003092 kubelet[3141]: I1213 01:30:30.002717 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9vn\" (UniqueName: \"kubernetes.io/projected/3c9798d9-0918-4f04-b830-5e93da684068-kube-api-access-lj9vn\") pod \"coredns-6f6b679f8f-9ljzf\" (UID: \"3c9798d9-0918-4f04-b830-5e93da684068\") " pod="kube-system/coredns-6f6b679f8f-9ljzf" Dec 13 01:30:30.004299 kubelet[3141]: I1213 01:30:30.003463 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6789e551-cd4a-4631-b879-423157868f76-calico-apiserver-certs\") pod \"calico-apiserver-6fb57f9dbd-4vmnq\" (UID: \"6789e551-cd4a-4631-b879-423157868f76\") " pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" Dec 13 01:30:30.004299 kubelet[3141]: I1213 01:30:30.003513 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1318da38-1af2-423d-bab0-f7184d00175d-config-volume\") pod \"coredns-6f6b679f8f-j9snz\" (UID: \"1318da38-1af2-423d-bab0-f7184d00175d\") " pod="kube-system/coredns-6f6b679f8f-j9snz" Dec 13 01:30:30.004299 kubelet[3141]: I1213 01:30:30.003547 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p7d\" (UniqueName: \"kubernetes.io/projected/6789e551-cd4a-4631-b879-423157868f76-kube-api-access-s4p7d\") pod \"calico-apiserver-6fb57f9dbd-4vmnq\" (UID: \"6789e551-cd4a-4631-b879-423157868f76\") " pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" Dec 13 01:30:30.004299 kubelet[3141]: I1213 01:30:30.003575 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c9798d9-0918-4f04-b830-5e93da684068-config-volume\") pod \"coredns-6f6b679f8f-9ljzf\" (UID: \"3c9798d9-0918-4f04-b830-5e93da684068\") " pod="kube-system/coredns-6f6b679f8f-9ljzf" Dec 13 01:30:30.004299 kubelet[3141]: I1213 01:30:30.003606 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj526\" (UniqueName: \"kubernetes.io/projected/1318da38-1af2-423d-bab0-f7184d00175d-kube-api-access-tj526\") pod \"coredns-6f6b679f8f-j9snz\" (UID: \"1318da38-1af2-423d-bab0-f7184d00175d\") " pod="kube-system/coredns-6f6b679f8f-j9snz" Dec 13 01:30:30.025584 systemd[1]: Created slice kubepods-besteffort-pode810db5b_2666_45f5_b096_3468d8993a9c.slice - libcontainer container kubepods-besteffort-pode810db5b_2666_45f5_b096_3468d8993a9c.slice. Dec 13 01:30:30.041568 systemd[1]: Created slice kubepods-besteffort-podd3ab87f4_6e23_4e25_bdd7_87d4f2bfc4d4.slice - libcontainer container kubepods-besteffort-podd3ab87f4_6e23_4e25_bdd7_87d4f2bfc4d4.slice. 
Dec 13 01:30:30.106204 kubelet[3141]: I1213 01:30:30.105853 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vbk\" (UniqueName: \"kubernetes.io/projected/d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4-kube-api-access-67vbk\") pod \"calico-kube-controllers-57c86ff57f-7c9qb\" (UID: \"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4\") " pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" Dec 13 01:30:30.106204 kubelet[3141]: I1213 01:30:30.105997 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e810db5b-2666-45f5-b096-3468d8993a9c-calico-apiserver-certs\") pod \"calico-apiserver-6fb57f9dbd-9szhv\" (UID: \"e810db5b-2666-45f5-b096-3468d8993a9c\") " pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" Dec 13 01:30:30.109719 kubelet[3141]: I1213 01:30:30.109605 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdnj\" (UniqueName: \"kubernetes.io/projected/e810db5b-2666-45f5-b096-3468d8993a9c-kube-api-access-7vdnj\") pod \"calico-apiserver-6fb57f9dbd-9szhv\" (UID: \"e810db5b-2666-45f5-b096-3468d8993a9c\") " pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" Dec 13 01:30:30.110175 kubelet[3141]: I1213 01:30:30.110145 3141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4-tigera-ca-bundle\") pod \"calico-kube-controllers-57c86ff57f-7c9qb\" (UID: \"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4\") " pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" Dec 13 01:30:30.241062 containerd[1956]: time="2024-12-13T01:30:30.233471621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9ljzf,Uid:3c9798d9-0918-4f04-b830-5e93da684068,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:30.293290 containerd[1956]: time="2024-12-13T01:30:30.293229348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j9snz,Uid:1318da38-1af2-423d-bab0-f7184d00175d,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:30.399040 containerd[1956]: time="2024-12-13T01:30:30.396418938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c86ff57f-7c9qb,Uid:d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:30.928989 containerd[1956]: time="2024-12-13T01:30:30.928065815Z" level=error msg="Failed to destroy network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.932671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194-shm.mount: Deactivated successfully. 
Dec 13 01:30:30.939637 containerd[1956]: time="2024-12-13T01:30:30.939114860Z" level=error msg="Failed to destroy network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.947827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577-shm.mount: Deactivated successfully. Dec 13 01:30:30.955385 containerd[1956]: time="2024-12-13T01:30:30.955325228Z" level=error msg="encountered an error cleaning up failed sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.955644 containerd[1956]: time="2024-12-13T01:30:30.955496808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j9snz,Uid:1318da38-1af2-423d-bab0-f7184d00175d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.956492 containerd[1956]: time="2024-12-13T01:30:30.955323227Z" level=error msg="encountered an error cleaning up failed sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.956782 containerd[1956]: time="2024-12-13T01:30:30.956658812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c86ff57f-7c9qb,Uid:d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.967416 kubelet[3141]: E1213 01:30:30.966994 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.967416 kubelet[3141]: E1213 01:30:30.967102 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" Dec 13 01:30:30.967416 
kubelet[3141]: E1213 01:30:30.967142 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" Dec 13 01:30:30.968179 kubelet[3141]: E1213 01:30:30.967236 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57c86ff57f-7c9qb_calico-system(d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57c86ff57f-7c9qb_calico-system(d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" podUID="d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4" Dec 13 01:30:30.968179 kubelet[3141]: E1213 01:30:30.967262 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.968179 kubelet[3141]: E1213 01:30:30.967300 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j9snz" Dec 13 01:30:30.968369 kubelet[3141]: E1213 01:30:30.967320 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-j9snz" Dec 13 01:30:30.968369 kubelet[3141]: E1213 01:30:30.967355 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-j9snz_kube-system(1318da38-1af2-423d-bab0-f7184d00175d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-j9snz_kube-system(1318da38-1af2-423d-bab0-f7184d00175d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j9snz" 
podUID="1318da38-1af2-423d-bab0-f7184d00175d" Dec 13 01:30:30.975930 containerd[1956]: time="2024-12-13T01:30:30.975612465Z" level=error msg="Failed to destroy network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.978767 containerd[1956]: time="2024-12-13T01:30:30.977280751Z" level=error msg="encountered an error cleaning up failed sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.978767 containerd[1956]: time="2024-12-13T01:30:30.977353134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9ljzf,Uid:3c9798d9-0918-4f04-b830-5e93da684068,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.979100 kubelet[3141]: E1213 01:30:30.978098 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:30.979100 kubelet[3141]: E1213 01:30:30.978157 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9ljzf" Dec 13 01:30:30.979100 kubelet[3141]: E1213 01:30:30.978185 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9ljzf" Dec 13 01:30:30.980475 kubelet[3141]: E1213 01:30:30.978235 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9ljzf_kube-system(3c9798d9-0918-4f04-b830-5e93da684068)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9ljzf_kube-system(3c9798d9-0918-4f04-b830-5e93da684068)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9ljzf" podUID="3c9798d9-0918-4f04-b830-5e93da684068" Dec 13 01:30:30.983399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b-shm.mount: Deactivated successfully. Dec 13 01:30:31.011814 kubelet[3141]: I1213 01:30:31.011777 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:31.017664 kubelet[3141]: I1213 01:30:31.017627 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:31.020923 containerd[1956]: time="2024-12-13T01:30:31.020363122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:30:31.024951 containerd[1956]: time="2024-12-13T01:30:31.024615020Z" level=info msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" Dec 13 01:30:31.028465 containerd[1956]: time="2024-12-13T01:30:31.027588531Z" level=info msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" Dec 13 01:30:31.031532 containerd[1956]: time="2024-12-13T01:30:31.030380664Z" level=info msg="Ensure that sandbox 6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194 in task-service has been cleanup successfully" Dec 13 01:30:31.032587 containerd[1956]: time="2024-12-13T01:30:31.032499998Z" level=info msg="Ensure that sandbox 4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577 in task-service has been cleanup successfully" Dec 13 01:30:31.041689 kubelet[3141]: I1213 01:30:31.040778 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:31.045589 containerd[1956]: time="2024-12-13T01:30:31.044239507Z" level=info msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" Dec 13 01:30:31.045589 containerd[1956]: time="2024-12-13T01:30:31.045257398Z" level=info msg="Ensure that sandbox a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b in task-service has been cleanup successfully" Dec 13 01:30:31.149373 kubelet[3141]: E1213 01:30:31.149333 3141 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.149952 kubelet[3141]: E1213 01:30:31.149607 3141 projected.go:194] Error preparing data for projected volume kube-api-access-s4p7d for pod calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.152370 kubelet[3141]: E1213 01:30:31.151306 3141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6789e551-cd4a-4631-b879-423157868f76-kube-api-access-s4p7d podName:6789e551-cd4a-4631-b879-423157868f76 nodeName:}" failed. No retries permitted until 2024-12-13 01:30:31.649776148 +0000 UTC m=+36.142434185 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s4p7d" (UniqueName: "kubernetes.io/projected/6789e551-cd4a-4631-b879-423157868f76-kube-api-access-s4p7d") pod "calico-apiserver-6fb57f9dbd-4vmnq" (UID: "6789e551-cd4a-4631-b879-423157868f76") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.179209 containerd[1956]: time="2024-12-13T01:30:31.179062987Z" level=error msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" failed" error="failed to destroy network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.180378 kubelet[3141]: E1213 01:30:31.180143 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:31.180378 kubelet[3141]: E1213 01:30:31.180222 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194"} Dec 13 01:30:31.180378 kubelet[3141]: E1213 01:30:31.180298 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:31.180378 kubelet[3141]: E1213 01:30:31.180331 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" podUID="d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4" Dec 13 01:30:31.189226 containerd[1956]: time="2024-12-13T01:30:31.189111331Z" level=error msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" failed" error="failed to destroy network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.189452 kubelet[3141]: E1213 01:30:31.189404 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:31.189668 kubelet[3141]: E1213 01:30:31.189599 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577"} Dec 13 01:30:31.189668 kubelet[3141]: E1213 01:30:31.189651 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1318da38-1af2-423d-bab0-f7184d00175d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:31.189912 kubelet[3141]: E1213 01:30:31.189683 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1318da38-1af2-423d-bab0-f7184d00175d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-j9snz" podUID="1318da38-1af2-423d-bab0-f7184d00175d" Dec 13 01:30:31.191310 containerd[1956]: time="2024-12-13T01:30:31.191270625Z" level=error msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" failed" error="failed to destroy network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.191497 kubelet[3141]: E1213 01:30:31.191457 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:31.191564 kubelet[3141]: E1213 01:30:31.191502 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b"} Dec 13 01:30:31.191564 kubelet[3141]: E1213 01:30:31.191543 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c9798d9-0918-4f04-b830-5e93da684068\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:31.191671 kubelet[3141]: E1213 01:30:31.191574 3141 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c9798d9-0918-4f04-b830-5e93da684068\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9ljzf" podUID="3c9798d9-0918-4f04-b830-5e93da684068" Dec 13 01:30:31.254639 kubelet[3141]: E1213 01:30:31.254595 3141 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.255141 kubelet[3141]: E1213 01:30:31.254718 3141 projected.go:194] Error preparing data for projected volume kube-api-access-7vdnj for pod calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.255141 kubelet[3141]: E1213 01:30:31.254795 3141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e810db5b-2666-45f5-b096-3468d8993a9c-kube-api-access-7vdnj podName:e810db5b-2666-45f5-b096-3468d8993a9c nodeName:}" failed. No retries permitted until 2024-12-13 01:30:31.754773267 +0000 UTC m=+36.247431292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7vdnj" (UniqueName: "kubernetes.io/projected/e810db5b-2666-45f5-b096-3468d8993a9c-kube-api-access-7vdnj") pod "calico-apiserver-6fb57f9dbd-9szhv" (UID: "e810db5b-2666-45f5-b096-3468d8993a9c") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:30:31.748131 systemd[1]: Created slice kubepods-besteffort-pod38d9e318_7884_46ef_aa8d_69d6c11c0096.slice - libcontainer container kubepods-besteffort-pod38d9e318_7884_46ef_aa8d_69d6c11c0096.slice. 
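
The repeated failures above all hinge on one readiness condition: the Calico CNI plugin will not add or delete pod networking until calico/node has written its nodename file under /var/lib/calico/, which is exactly what the "stat /var/lib/calico/nodename: no such file or directory" hint points at. The volume-mount failure interleaved here is a separate, transient issue (the configmap cache had not synced yet) and is rescheduled after 500 ms; the sandbox operations are retried on each pod sync until the Calico check passes. A minimal Go sketch of that kind of gate (illustrative only; the constant and function names are assumptions, not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the marker calico/node writes once it is running and
    // has /var/lib/calico/ mounted; the CNI plugin refuses to work without it.
    const nodenameFile = "/var/lib/calico/nodename"

    // ensureNodeReady mirrors the gate behind the repeated log message above:
    // every CNI add/delete fails with the same hint until the file exists.
    func ensureNodeReady() error {
        if _, err := os.Stat(nodenameFile); err != nil {
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := ensureNodeReady(); err != nil {
            fmt.Println("CNI operation would fail:", err)
            return
        }
        fmt.Println("calico/node is ready; sandbox setup can proceed")
    }

Once calico-node is actually running (see the PullImage/StartContainer entries further down), the same StopPodSandbox and RunPodSandbox calls go through, as the later entries show.
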
Dec 13 01:30:31.751559 containerd[1956]: time="2024-12-13T01:30:31.751516659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjm8g,Uid:38d9e318-7884-46ef-aa8d-69d6c11c0096,Namespace:calico-system,Attempt:0,}" Dec 13 01:30:31.812049 containerd[1956]: time="2024-12-13T01:30:31.812001355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-4vmnq,Uid:6789e551-cd4a-4631-b879-423157868f76,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:30:31.889195 containerd[1956]: time="2024-12-13T01:30:31.889143570Z" level=error msg="Failed to destroy network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.891352 containerd[1956]: time="2024-12-13T01:30:31.891299059Z" level=error msg="encountered an error cleaning up failed sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.891768 containerd[1956]: time="2024-12-13T01:30:31.891377282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjm8g,Uid:38d9e318-7884-46ef-aa8d-69d6c11c0096,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.892034 kubelet[3141]: E1213 01:30:31.891976 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.892133 kubelet[3141]: E1213 01:30:31.892034 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:31.892133 kubelet[3141]: E1213 01:30:31.892061 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tjm8g" Dec 13 01:30:31.892889 kubelet[3141]: E1213 01:30:31.892129 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tjm8g_calico-system(38d9e318-7884-46ef-aa8d-69d6c11c0096)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-tjm8g_calico-system(38d9e318-7884-46ef-aa8d-69d6c11c0096)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 01:30:31.943467 containerd[1956]: time="2024-12-13T01:30:31.943414431Z" level=error msg="Failed to destroy network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.944002 containerd[1956]: time="2024-12-13T01:30:31.943947954Z" level=error msg="encountered an error cleaning up failed sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.944064 containerd[1956]: time="2024-12-13T01:30:31.944029249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-4vmnq,Uid:6789e551-cd4a-4631-b879-423157868f76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.944343 kubelet[3141]: E1213 01:30:31.944293 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:31.944441 kubelet[3141]: E1213 01:30:31.944357 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" Dec 13 01:30:31.944441 kubelet[3141]: E1213 01:30:31.944386 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" Dec 13 01:30:31.944527 kubelet[3141]: E1213 01:30:31.944439 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6fb57f9dbd-4vmnq_calico-apiserver(6789e551-cd4a-4631-b879-423157868f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb57f9dbd-4vmnq_calico-apiserver(6789e551-cd4a-4631-b879-423157868f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" podUID="6789e551-cd4a-4631-b879-423157868f76" Dec 13 01:30:32.047528 kubelet[3141]: I1213 01:30:32.044682 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:32.048759 containerd[1956]: time="2024-12-13T01:30:32.046593923Z" level=info msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" Dec 13 01:30:32.048759 containerd[1956]: time="2024-12-13T01:30:32.046782463Z" level=info msg="Ensure that sandbox 873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e in task-service has been cleanup successfully" Dec 13 01:30:32.049541 kubelet[3141]: I1213 01:30:32.049480 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:32.068358 containerd[1956]: time="2024-12-13T01:30:32.068300326Z" level=info msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" Dec 13 01:30:32.069174 containerd[1956]: time="2024-12-13T01:30:32.069132000Z" level=info msg="Ensure that sandbox b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba in task-service has been cleanup successfully" Dec 13 01:30:32.141620 containerd[1956]: time="2024-12-13T01:30:32.141473153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-9szhv,Uid:e810db5b-2666-45f5-b096-3468d8993a9c,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:30:32.149831 containerd[1956]: time="2024-12-13T01:30:32.149744172Z" level=error msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" failed" error="failed to destroy network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.150446 kubelet[3141]: E1213 01:30:32.150405 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:32.150749 kubelet[3141]: E1213 01:30:32.150699 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e"} Dec 13 01:30:32.150972 kubelet[3141]: E1213 01:30:32.150905 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"6789e551-cd4a-4631-b879-423157868f76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:32.152056 kubelet[3141]: E1213 01:30:32.151513 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6789e551-cd4a-4631-b879-423157868f76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" podUID="6789e551-cd4a-4631-b879-423157868f76" Dec 13 01:30:32.177062 containerd[1956]: time="2024-12-13T01:30:32.176925796Z" level=error msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" failed" error="failed to destroy network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.177375 kubelet[3141]: E1213 01:30:32.177241 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:32.177477 kubelet[3141]: E1213 01:30:32.177396 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba"} Dec 13 01:30:32.177477 kubelet[3141]: E1213 01:30:32.177441 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38d9e318-7884-46ef-aa8d-69d6c11c0096\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:32.177721 kubelet[3141]: E1213 01:30:32.177477 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38d9e318-7884-46ef-aa8d-69d6c11c0096\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tjm8g" podUID="38d9e318-7884-46ef-aa8d-69d6c11c0096" Dec 13 
01:30:32.248811 containerd[1956]: time="2024-12-13T01:30:32.248759716Z" level=error msg="Failed to destroy network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.249203 containerd[1956]: time="2024-12-13T01:30:32.249159003Z" level=error msg="encountered an error cleaning up failed sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.249391 containerd[1956]: time="2024-12-13T01:30:32.249354104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-9szhv,Uid:e810db5b-2666-45f5-b096-3468d8993a9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.249663 kubelet[3141]: E1213 01:30:32.249628 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:32.249749 kubelet[3141]: E1213 01:30:32.249701 3141 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" Dec 13 01:30:32.249810 kubelet[3141]: E1213 01:30:32.249740 3141 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" Dec 13 01:30:32.249882 kubelet[3141]: E1213 01:30:32.249799 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fb57f9dbd-9szhv_calico-apiserver(e810db5b-2666-45f5-b096-3468d8993a9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fb57f9dbd-9szhv_calico-apiserver(e810db5b-2666-45f5-b096-3468d8993a9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" podUID="e810db5b-2666-45f5-b096-3468d8993a9c" Dec 13 01:30:32.734675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e-shm.mount: Deactivated successfully. Dec 13 01:30:32.734799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba-shm.mount: Deactivated successfully. Dec 13 01:30:33.056865 kubelet[3141]: I1213 01:30:33.056660 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:33.058817 containerd[1956]: time="2024-12-13T01:30:33.058465580Z" level=info msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" Dec 13 01:30:33.076685 containerd[1956]: time="2024-12-13T01:30:33.076623856Z" level=info msg="Ensure that sandbox 23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc in task-service has been cleanup successfully" Dec 13 01:30:33.183055 containerd[1956]: time="2024-12-13T01:30:33.183000820Z" level=error msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" failed" error="failed to destroy network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:30:33.183288 kubelet[3141]: E1213 01:30:33.183248 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:33.183600 kubelet[3141]: E1213 01:30:33.183304 3141 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc"} Dec 13 01:30:33.183600 kubelet[3141]: E1213 01:30:33.183523 3141 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e810db5b-2666-45f5-b096-3468d8993a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:30:33.183600 kubelet[3141]: E1213 01:30:33.183564 3141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e810db5b-2666-45f5-b096-3468d8993a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" 
podUID="e810db5b-2666-45f5-b096-3468d8993a9c" Dec 13 01:30:40.278650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975623061.mount: Deactivated successfully. Dec 13 01:30:40.501704 containerd[1956]: time="2024-12-13T01:30:40.501419315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:30:40.587128 containerd[1956]: time="2024-12-13T01:30:40.586757161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:40.704251 containerd[1956]: time="2024-12-13T01:30:40.702935923Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:40.704251 containerd[1956]: time="2024-12-13T01:30:40.703814897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.683386358s" Dec 13 01:30:40.704251 containerd[1956]: time="2024-12-13T01:30:40.703914981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:30:40.706887 containerd[1956]: time="2024-12-13T01:30:40.706649985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:40.826983 containerd[1956]: time="2024-12-13T01:30:40.826939215Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:30:40.927123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491596301.mount: Deactivated successfully. Dec 13 01:30:40.963186 containerd[1956]: time="2024-12-13T01:30:40.963138655Z" level=info msg="CreateContainer within sandbox \"35dd47a49f97e5994344b5910b663ea539d8ae338930c5701f2b4e6aae1208d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c\"" Dec 13 01:30:40.972993 containerd[1956]: time="2024-12-13T01:30:40.972574956Z" level=info msg="StartContainer for \"96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c\"" Dec 13 01:30:41.346190 systemd[1]: Started cri-containerd-96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c.scope - libcontainer container 96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c. Dec 13 01:30:41.407993 containerd[1956]: time="2024-12-13T01:30:41.407858128Z" level=info msg="StartContainer for \"96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c\" returns successfully" Dec 13 01:30:41.622702 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:30:41.624359 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:30:42.223530 kubelet[3141]: I1213 01:30:42.209506 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-75pkx" podStartSLOduration=2.514360712 podStartE2EDuration="26.173735966s" podCreationTimestamp="2024-12-13 01:30:16 +0000 UTC" firstStartedPulling="2024-12-13 01:30:17.078799646 +0000 UTC m=+21.571457672" lastFinishedPulling="2024-12-13 01:30:40.738174902 +0000 UTC m=+45.230832926" observedRunningTime="2024-12-13 01:30:42.173242127 +0000 UTC m=+46.665900172" watchObservedRunningTime="2024-12-13 01:30:42.173735966 +0000 UTC m=+46.666394010" Dec 13 01:30:43.177504 systemd[1]: run-containerd-runc-k8s.io-96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c-runc.mK637m.mount: Deactivated successfully. Dec 13 01:30:43.863536 kernel: bpftool[4613]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:30:44.134584 (udev-worker)[4425]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:30:44.139344 systemd-networkd[1800]: vxlan.calico: Link UP Dec 13 01:30:44.139357 systemd-networkd[1800]: vxlan.calico: Gained carrier Dec 13 01:30:44.180703 (udev-worker)[4420]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:30:44.754542 containerd[1956]: time="2024-12-13T01:30:44.753368099Z" level=info msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" Dec 13 01:30:44.754542 containerd[1956]: time="2024-12-13T01:30:44.753437777Z" level=info msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.909 [INFO][4708] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.910 [INFO][4708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" iface="eth0" netns="/var/run/netns/cni-3c858aa0-2a8f-99ff-38d6-4e74558c1d41" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.910 [INFO][4708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" iface="eth0" netns="/var/run/netns/cni-3c858aa0-2a8f-99ff-38d6-4e74558c1d41" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" iface="eth0" netns="/var/run/netns/cni-3c858aa0-2a8f-99ff-38d6-4e74558c1d41" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.347 [INFO][4720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.347 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.348 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.366 [WARNING][4720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.366 [INFO][4720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.369 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:45.404796 containerd[1956]: 2024-12-13 01:30:45.390 [INFO][4708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:45.414602 systemd[1]: run-netns-cni\x2d3c858aa0\x2d2a8f\x2d99ff\x2d38d6\x2d4e74558c1d41.mount: Deactivated successfully. Dec 13 01:30:45.434181 containerd[1956]: time="2024-12-13T01:30:45.434131687Z" level=info msg="TearDown network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" successfully" Dec 13 01:30:45.434181 containerd[1956]: time="2024-12-13T01:30:45.434197873Z" level=info msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" returns successfully" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.907 [INFO][4709] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.911 [INFO][4709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" iface="eth0" netns="/var/run/netns/cni-3fb05d5e-3692-fa04-4218-668de309b1bf" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.912 [INFO][4709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" iface="eth0" netns="/var/run/netns/cni-3fb05d5e-3692-fa04-4218-668de309b1bf" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" iface="eth0" netns="/var/run/netns/cni-3fb05d5e-3692-fa04-4218-668de309b1bf" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4709] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:44.913 [INFO][4709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.344 [INFO][4721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.348 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.369 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.420 [WARNING][4721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.420 [INFO][4721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.434 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:45.446952 containerd[1956]: 2024-12-13 01:30:45.441 [INFO][4709] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:45.454002 containerd[1956]: time="2024-12-13T01:30:45.447102785Z" level=info msg="TearDown network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" successfully" Dec 13 01:30:45.454002 containerd[1956]: time="2024-12-13T01:30:45.447133859Z" level=info msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" returns successfully" Dec 13 01:30:45.454002 containerd[1956]: time="2024-12-13T01:30:45.450992752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9ljzf,Uid:3c9798d9-0918-4f04-b830-5e93da684068,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:45.454002 containerd[1956]: time="2024-12-13T01:30:45.451962397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-4vmnq,Uid:6789e551-cd4a-4631-b879-423157868f76,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:45.454546 systemd[1]: run-netns-cni\x2d3fb05d5e\x2d3692\x2dfa04\x2d4218\x2d668de309b1bf.mount: Deactivated successfully. Dec 13 01:30:45.748370 containerd[1956]: time="2024-12-13T01:30:45.747968107Z" level=info msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" Dec 13 01:30:45.749009 containerd[1956]: time="2024-12-13T01:30:45.748979510Z" level=info msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" Dec 13 01:30:45.770459 systemd-networkd[1800]: vxlan.calico: Gained IPv6LL Dec 13 01:30:46.046552 systemd-networkd[1800]: calidef8caba816: Link UP Dec 13 01:30:46.047301 systemd-networkd[1800]: calidef8caba816: Gained carrier Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.688 [INFO][4735] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0 coredns-6f6b679f8f- kube-system 3c9798d9-0918-4f04-b830-5e93da684068 758 0 2024-12-13 01:30:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-20 coredns-6f6b679f8f-9ljzf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidef8caba816 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.688 [INFO][4735] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.829 [INFO][4762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" HandleID="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.860 [INFO][4762] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" HandleID="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-20", "pod":"coredns-6f6b679f8f-9ljzf", "timestamp":"2024-12-13 01:30:45.829544977 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.861 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.862 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.862 [INFO][4762] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.869 [INFO][4762] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.900 [INFO][4762] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.927 [INFO][4762] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.933 [INFO][4762] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.942 [INFO][4762] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.942 [INFO][4762] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.953 [INFO][4762] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.962 [INFO][4762] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4762] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.129/26] block=192.168.55.128/26 handle="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4762] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.129/26] handle="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" host="ip-172-31-31-20" Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:46.129810 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.129/26] IPv6=[] ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" HandleID="k8s-pod-network.07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.022 [INFO][4735] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c9798d9-0918-4f04-b830-5e93da684068", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"coredns-6f6b679f8f-9ljzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef8caba816", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.022 [INFO][4735] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.129/32] ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.022 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidef8caba816 ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.048 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" 
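
The IPAM trace above shows this node (ip-172-31-31-20) holding an affinity for the block 192.168.55.128/26 and handing out the lowest free address, 192.168.55.129, to the coredns pod; the next workload below receives 192.168.55.130. A minimal sketch of sequential assignment from such a block (a deliberate simplification: Calico's real allocator also tracks handles, affinities, and the datastore writes logged above):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a per-host IPAM block such as 192.168.55.128/26.
    type block struct {
        prefix    netip.Prefix
        allocated map[netip.Addr]string // address -> owning handle
    }

    // assign hands out the lowest free address after the block's base
    // address and records which handle owns it.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
            if _, used := b.allocated[a]; !used {
                b.allocated[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        b := &block{
            prefix:    netip.MustParsePrefix("192.168.55.128/26"),
            allocated: map[netip.Addr]string{},
        }
        for _, pod := range []string{"coredns-6f6b679f8f-9ljzf", "calico-apiserver-6fb57f9dbd-4vmnq"} {
            ip, _ := b.assign(pod)
            fmt.Printf("%s -> %s\n", pod, ip) // .129, then .130, matching the log
        }
    }
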
Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.050 [INFO][4735] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c9798d9-0918-4f04-b830-5e93da684068", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e", Pod:"coredns-6f6b679f8f-9ljzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef8caba816", MAC:"0e:e3:e1:66:ac:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:46.138042 containerd[1956]: 2024-12-13 01:30:46.097 [INFO][4735] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-9ljzf" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:46.236374 systemd-networkd[1800]: calif708f463ef0: Link UP Dec 13 01:30:46.243194 systemd-networkd[1800]: calif708f463ef0: Gained carrier Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.986 [INFO][4798] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.987 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" iface="eth0" netns="/var/run/netns/cni-adaf7dd5-63bf-040f-48ce-ef9cb4fdceee" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.989 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" iface="eth0" netns="/var/run/netns/cni-adaf7dd5-63bf-040f-48ce-ef9cb4fdceee" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" iface="eth0" netns="/var/run/netns/cni-adaf7dd5-63bf-040f-48ce-ef9cb4fdceee" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.991 [INFO][4798] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:45.992 [INFO][4798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.199 [INFO][4815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.205 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.205 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.272 [WARNING][4815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.273 [INFO][4815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.278 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:46.317973 containerd[1956]: 2024-12-13 01:30:46.308 [INFO][4798] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:46.317973 containerd[1956]: time="2024-12-13T01:30:46.314064364Z" level=info msg="TearDown network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" successfully" Dec 13 01:30:46.317973 containerd[1956]: time="2024-12-13T01:30:46.314137390Z" level=info msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" returns successfully" Dec 13 01:30:46.323529 systemd[1]: run-netns-cni\x2dadaf7dd5\x2d63bf\x2d040f\x2d48ce\x2def9cb4fdceee.mount: Deactivated successfully. 
Dec 13 01:30:46.334665 containerd[1956]: time="2024-12-13T01:30:46.334157979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c86ff57f-7c9qb,Uid:d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.695 [INFO][4744] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0 calico-apiserver-6fb57f9dbd- calico-apiserver 6789e551-cd4a-4631-b879-423157868f76 759 0 2024-12-13 01:30:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb57f9dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-20 calico-apiserver-6fb57f9dbd-4vmnq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif708f463ef0 [] []}} ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.695 [INFO][4744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.861 [INFO][4761] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" HandleID="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.895 [INFO][4761] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" HandleID="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-20", "pod":"calico-apiserver-6fb57f9dbd-4vmnq", "timestamp":"2024-12-13 01:30:45.861085848 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.896 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.990 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:45.991 [INFO][4761] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.015 [INFO][4761] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.026 [INFO][4761] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.065 [INFO][4761] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.077 [INFO][4761] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.097 [INFO][4761] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.105 [INFO][4761] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.115 [INFO][4761] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03 Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.164 [INFO][4761] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.187 [INFO][4761] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.130/26] block=192.168.55.128/26 handle="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.187 [INFO][4761] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.130/26] handle="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" host="ip-172-31-31-20" Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.187 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:46.348654 containerd[1956]: 2024-12-13 01:30:46.187 [INFO][4761] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.130/26] IPv6=[] ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" HandleID="k8s-pod-network.09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.207 [INFO][4744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6789e551-cd4a-4631-b879-423157868f76", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"calico-apiserver-6fb57f9dbd-4vmnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif708f463ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.208 [INFO][4744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.130/32] ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.208 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif708f463ef0 ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.246 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.248 [INFO][4744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6789e551-cd4a-4631-b879-423157868f76", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03", Pod:"calico-apiserver-6fb57f9dbd-4vmnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif708f463ef0", MAC:"7e:72:79:fc:26:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:46.349833 containerd[1956]: 2024-12-13 01:30:46.303 [INFO][4744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-4vmnq" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.008 [INFO][4797] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.010 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" iface="eth0" netns="/var/run/netns/cni-46124437-aecc-53a7-76b3-6c7ee2fab20d" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.011 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" iface="eth0" netns="/var/run/netns/cni-46124437-aecc-53a7-76b3-6c7ee2fab20d" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.012 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" iface="eth0" netns="/var/run/netns/cni-46124437-aecc-53a7-76b3-6c7ee2fab20d" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.013 [INFO][4797] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.013 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.233 [INFO][4820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.235 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.278 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.338 [WARNING][4820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.338 [INFO][4820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.342 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:46.387195 containerd[1956]: 2024-12-13 01:30:46.359 [INFO][4797] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:46.390202 containerd[1956]: time="2024-12-13T01:30:46.388814957Z" level=info msg="TearDown network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" successfully" Dec 13 01:30:46.390202 containerd[1956]: time="2024-12-13T01:30:46.388883340Z" level=info msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" returns successfully" Dec 13 01:30:46.399905 containerd[1956]: time="2024-12-13T01:30:46.398931739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j9snz,Uid:1318da38-1af2-423d-bab0-f7184d00175d,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:46.409939 systemd[1]: run-netns-cni\x2d46124437\x2daecc\x2d53a7\x2d76b3\x2d6c7ee2fab20d.mount: Deactivated successfully. Dec 13 01:30:46.442941 containerd[1956]: time="2024-12-13T01:30:46.442544797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:46.442941 containerd[1956]: time="2024-12-13T01:30:46.442625501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:46.442941 containerd[1956]: time="2024-12-13T01:30:46.442650009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:46.442941 containerd[1956]: time="2024-12-13T01:30:46.442752646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:46.530172 containerd[1956]: time="2024-12-13T01:30:46.529789859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:46.530172 containerd[1956]: time="2024-12-13T01:30:46.529883012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:46.530172 containerd[1956]: time="2024-12-13T01:30:46.529908035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:46.530172 containerd[1956]: time="2024-12-13T01:30:46.530033132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:46.561674 systemd[1]: Started cri-containerd-07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e.scope - libcontainer container 07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e. Dec 13 01:30:46.681886 systemd[1]: Started cri-containerd-09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03.scope - libcontainer container 09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03. Dec 13 01:30:46.793998 containerd[1956]: time="2024-12-13T01:30:46.791141200Z" level=info msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" Dec 13 01:30:46.863415 containerd[1956]: time="2024-12-13T01:30:46.863372543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9ljzf,Uid:3c9798d9-0918-4f04-b830-5e93da684068,Namespace:kube-system,Attempt:1,} returns sandbox id \"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e\"" Dec 13 01:30:46.878811 containerd[1956]: time="2024-12-13T01:30:46.878684603Z" level=info msg="CreateContainer within sandbox \"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:46.992233 containerd[1956]: time="2024-12-13T01:30:46.991747155Z" level=info msg="CreateContainer within sandbox \"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b16cb99e4a28ddebf1a96e6970eb8cac273b873ad226c4cfdca7bd3a6c4a12b3\"" Dec 13 01:30:46.999201 containerd[1956]: time="2024-12-13T01:30:46.999142607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-4vmnq,Uid:6789e551-cd4a-4631-b879-423157868f76,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03\"" Dec 13 01:30:47.005081 containerd[1956]: time="2024-12-13T01:30:47.005027820Z" level=info msg="StartContainer for \"b16cb99e4a28ddebf1a96e6970eb8cac273b873ad226c4cfdca7bd3a6c4a12b3\"" Dec 13 01:30:47.030076 containerd[1956]: time="2024-12-13T01:30:47.030013913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:47.200597 systemd[1]: 
Started cri-containerd-b16cb99e4a28ddebf1a96e6970eb8cac273b873ad226c4cfdca7bd3a6c4a12b3.scope - libcontainer container b16cb99e4a28ddebf1a96e6970eb8cac273b873ad226c4cfdca7bd3a6c4a12b3. Dec 13 01:30:47.329225 containerd[1956]: time="2024-12-13T01:30:47.329177689Z" level=info msg="StartContainer for \"b16cb99e4a28ddebf1a96e6970eb8cac273b873ad226c4cfdca7bd3a6c4a12b3\" returns successfully" Dec 13 01:30:47.398728 systemd-networkd[1800]: cali8c065d7234e: Link UP Dec 13 01:30:47.399081 systemd-networkd[1800]: cali8c065d7234e: Gained carrier Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:46.725 [INFO][4877] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0 calico-kube-controllers-57c86ff57f- calico-system d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4 765 0 2024-12-13 01:30:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57c86ff57f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-20 calico-kube-controllers-57c86ff57f-7c9qb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c065d7234e [] []}} ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:46.725 [INFO][4877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.081 [INFO][4952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" HandleID="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.234 [INFO][4952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" HandleID="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001039e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-20", "pod":"calico-kube-controllers-57c86ff57f-7c9qb", "timestamp":"2024-12-13 01:30:47.081600424 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.234 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.236 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.236 [INFO][4952] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.258 [INFO][4952] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.292 [INFO][4952] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.316 [INFO][4952] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.323 [INFO][4952] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.334 [INFO][4952] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.334 [INFO][4952] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.347 [INFO][4952] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43 Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.357 [INFO][4952] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.376 [INFO][4952] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.131/26] block=192.168.55.128/26 handle="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.377 [INFO][4952] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.131/26] handle="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" host="ip-172-31-31-20" Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.377 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:47.476573 containerd[1956]: 2024-12-13 01:30:47.377 [INFO][4952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.131/26] IPv6=[] ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" HandleID="k8s-pod-network.68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.386 [INFO][4877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0", GenerateName:"calico-kube-controllers-57c86ff57f-", Namespace:"calico-system", SelfLink:"", UID:"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c86ff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"calico-kube-controllers-57c86ff57f-7c9qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c065d7234e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.386 [INFO][4877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.131/32] ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.386 [INFO][4877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c065d7234e ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.403 [INFO][4877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.420 [INFO][4877] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0", GenerateName:"calico-kube-controllers-57c86ff57f-", Namespace:"calico-system", SelfLink:"", UID:"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c86ff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43", Pod:"calico-kube-controllers-57c86ff57f-7c9qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c065d7234e", MAC:"0e:89:8f:bb:d7:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:47.477572 containerd[1956]: 2024-12-13 01:30:47.469 [INFO][4877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43" Namespace="calico-system" Pod="calico-kube-controllers-57c86ff57f-7c9qb" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:47.585302 containerd[1956]: time="2024-12-13T01:30:47.584798794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:47.585302 containerd[1956]: time="2024-12-13T01:30:47.584921088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:47.585302 containerd[1956]: time="2024-12-13T01:30:47.584939910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:47.587902 containerd[1956]: time="2024-12-13T01:30:47.587071137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:47.636522 systemd-networkd[1800]: cali6787942ab30: Link UP Dec 13 01:30:47.642113 systemd[1]: Started cri-containerd-68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43.scope - libcontainer container 68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43. 
Dec 13 01:30:47.658621 systemd-networkd[1800]: cali6787942ab30: Gained carrier Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.146 [INFO][4980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.147 [INFO][4980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" iface="eth0" netns="/var/run/netns/cni-48c1e174-b8e5-99b9-f8d5-635e2c182ccd" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.150 [INFO][4980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" iface="eth0" netns="/var/run/netns/cni-48c1e174-b8e5-99b9-f8d5-635e2c182ccd" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.150 [INFO][4980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" iface="eth0" netns="/var/run/netns/cni-48c1e174-b8e5-99b9-f8d5-635e2c182ccd" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.150 [INFO][4980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.150 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.243 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.243 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.573 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.669 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.669 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.689 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:47.701862 containerd[1956]: 2024-12-13 01:30:47.696 [INFO][4980] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:47.704624 containerd[1956]: time="2024-12-13T01:30:47.704581489Z" level=info msg="TearDown network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" successfully" Dec 13 01:30:47.705620 containerd[1956]: time="2024-12-13T01:30:47.704864041Z" level=info msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" returns successfully" Dec 13 01:30:47.710864 containerd[1956]: time="2024-12-13T01:30:47.708067741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjm8g,Uid:38d9e318-7884-46ef-aa8d-69d6c11c0096,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:47.712463 systemd[1]: run-netns-cni\x2d48c1e174\x2db8e5\x2d99b9\x2df8d5\x2d635e2c182ccd.mount: Deactivated successfully. Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:46.782 [INFO][4898] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0 coredns-6f6b679f8f- kube-system 1318da38-1af2-423d-bab0-f7184d00175d 767 0 2024-12-13 01:30:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-20 coredns-6f6b679f8f-j9snz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6787942ab30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:46.785 [INFO][4898] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.171 [INFO][4968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" HandleID="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.235 [INFO][4968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" HandleID="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b210), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-20", "pod":"coredns-6f6b679f8f-j9snz", "timestamp":"2024-12-13 01:30:47.171100377 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.236 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.377 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.378 [INFO][4968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.393 [INFO][4968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.456 [INFO][4968] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.485 [INFO][4968] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.496 [INFO][4968] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.506 [INFO][4968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.506 [INFO][4968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.511 [INFO][4968] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808 Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.550 [INFO][4968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.571 [INFO][4968] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.132/26] block=192.168.55.128/26 handle="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.572 [INFO][4968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.132/26] handle="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" host="ip-172-31-31-20" Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.572 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:47.723102 containerd[1956]: 2024-12-13 01:30:47.573 [INFO][4968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.132/26] IPv6=[] ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" HandleID="k8s-pod-network.193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.596 [INFO][4898] cni-plugin/k8s.go 386: Populated endpoint ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1318da38-1af2-423d-bab0-f7184d00175d", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"coredns-6f6b679f8f-j9snz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6787942ab30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.604 [INFO][4898] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.132/32] ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.605 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6787942ab30 ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.658 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" 
Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.659 [INFO][4898] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1318da38-1af2-423d-bab0-f7184d00175d", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808", Pod:"coredns-6f6b679f8f-j9snz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6787942ab30", MAC:"4e:6b:77:4d:f5:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:47.725622 containerd[1956]: 2024-12-13 01:30:47.716 [INFO][4898] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808" Namespace="kube-system" Pod="coredns-6f6b679f8f-j9snz" WorkloadEndpoint="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:47.741335 containerd[1956]: time="2024-12-13T01:30:47.741226890Z" level=info msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" Dec 13 01:30:47.822687 containerd[1956]: time="2024-12-13T01:30:47.818251522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:47.822687 containerd[1956]: time="2024-12-13T01:30:47.818355281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:47.822687 containerd[1956]: time="2024-12-13T01:30:47.818378243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:47.822687 containerd[1956]: time="2024-12-13T01:30:47.818793789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:47.880960 systemd-networkd[1800]: calidef8caba816: Gained IPv6LL Dec 13 01:30:47.881323 systemd-networkd[1800]: calif708f463ef0: Gained IPv6LL Dec 13 01:30:47.900714 systemd[1]: Started cri-containerd-193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808.scope - libcontainer container 193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808. Dec 13 01:30:48.095903 containerd[1956]: time="2024-12-13T01:30:48.095229326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j9snz,Uid:1318da38-1af2-423d-bab0-f7184d00175d,Namespace:kube-system,Attempt:1,} returns sandbox id \"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808\"" Dec 13 01:30:48.113100 containerd[1956]: time="2024-12-13T01:30:48.112979751Z" level=info msg="CreateContainer within sandbox \"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:48.177246 containerd[1956]: time="2024-12-13T01:30:48.176754213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57c86ff57f-7c9qb,Uid:d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43\"" Dec 13 01:30:48.182949 containerd[1956]: time="2024-12-13T01:30:48.182806857Z" level=info msg="CreateContainer within sandbox \"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be3a51207354aa566aeb7781b80dab89d417a4aaca141de78330e0c2a3229db4\"" Dec 13 01:30:48.185101 containerd[1956]: time="2024-12-13T01:30:48.185067629Z" level=info msg="StartContainer for \"be3a51207354aa566aeb7781b80dab89d417a4aaca141de78330e0c2a3229db4\"" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.007 [INFO][5126] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.007 [INFO][5126] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" iface="eth0" netns="/var/run/netns/cni-31094930-9178-ac3c-048a-99e279e70425" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.010 [INFO][5126] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" iface="eth0" netns="/var/run/netns/cni-31094930-9178-ac3c-048a-99e279e70425" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.011 [INFO][5126] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" iface="eth0" netns="/var/run/netns/cni-31094930-9178-ac3c-048a-99e279e70425" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.012 [INFO][5126] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.013 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.195 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.197 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.197 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.236 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.237 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.249 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:48.268181 containerd[1956]: 2024-12-13 01:30:48.255 [INFO][5126] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:48.270868 containerd[1956]: time="2024-12-13T01:30:48.270133243Z" level=info msg="TearDown network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" successfully" Dec 13 01:30:48.270868 containerd[1956]: time="2024-12-13T01:30:48.270204196Z" level=info msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" returns successfully" Dec 13 01:30:48.271907 containerd[1956]: time="2024-12-13T01:30:48.271823880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-9szhv,Uid:e810db5b-2666-45f5-b096-3468d8993a9c,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:48.284017 kubelet[3141]: I1213 01:30:48.283940 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9ljzf" podStartSLOduration=47.283912882 podStartE2EDuration="47.283912882s" podCreationTimestamp="2024-12-13 01:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:48.281599011 +0000 UTC m=+52.774257055" watchObservedRunningTime="2024-12-13 01:30:48.283912882 +0000 UTC m=+52.776570926" Dec 13 01:30:48.335087 systemd[1]: Started cri-containerd-be3a51207354aa566aeb7781b80dab89d417a4aaca141de78330e0c2a3229db4.scope - libcontainer container be3a51207354aa566aeb7781b80dab89d417a4aaca141de78330e0c2a3229db4. Dec 13 01:30:48.424166 systemd[1]: run-netns-cni\x2d31094930\x2d9178\x2dac3c\x2d048a\x2d99e279e70425.mount: Deactivated successfully. Dec 13 01:30:48.619886 systemd[1]: Started sshd@7-172.31.31.20:22-139.178.68.195:43632.service - OpenSSH per-connection server daemon (139.178.68.195:43632). Dec 13 01:30:48.643023 containerd[1956]: time="2024-12-13T01:30:48.640356398Z" level=info msg="StartContainer for \"be3a51207354aa566aeb7781b80dab89d417a4aaca141de78330e0c2a3229db4\" returns successfully" Dec 13 01:30:48.901437 systemd-networkd[1800]: cali90f5dd138a6: Link UP Dec 13 01:30:48.909783 systemd-networkd[1800]: cali90f5dd138a6: Gained carrier Dec 13 01:30:48.950891 sshd[5236]: Accepted publickey for core from 139.178.68.195 port 43632 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:48.962713 sshd[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:48.986304 systemd-logind[1938]: New session 8 of user core. Dec 13 01:30:48.992437 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:47.966 [INFO][5111] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0 csi-node-driver- calico-system 38d9e318-7884-46ef-aa8d-69d6c11c0096 778 0 2024-12-13 01:30:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-20 csi-node-driver-tjm8g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali90f5dd138a6 [] []}} ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:47.968 [INFO][5111] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.176 [INFO][5162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" HandleID="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.463 [INFO][5162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" HandleID="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044bb60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-20", "pod":"csi-node-driver-tjm8g", "timestamp":"2024-12-13 01:30:48.172150917 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.464 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.465 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.466 [INFO][5162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.543 [INFO][5162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.586 [INFO][5162] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.614 [INFO][5162] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.646 [INFO][5162] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.685 [INFO][5162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.686 [INFO][5162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.723 [INFO][5162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65 Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.780 [INFO][5162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.819 [INFO][5162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.133/26] block=192.168.55.128/26 handle="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.820 [INFO][5162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.133/26] handle="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" host="ip-172-31-31-20" Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.820 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:49.008909 containerd[1956]: 2024-12-13 01:30:48.820 [INFO][5162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.133/26] IPv6=[] ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" HandleID="k8s-pod-network.46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.856 [INFO][5111] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38d9e318-7884-46ef-aa8d-69d6c11c0096", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"csi-node-driver-tjm8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90f5dd138a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.860 [INFO][5111] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.133/32] ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.861 [INFO][5111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90f5dd138a6 ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.903 [INFO][5111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.903 [INFO][5111] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" 
Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38d9e318-7884-46ef-aa8d-69d6c11c0096", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65", Pod:"csi-node-driver-tjm8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90f5dd138a6", MAC:"5e:1e:e5:17:0f:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:49.013232 containerd[1956]: 2024-12-13 01:30:48.996 [INFO][5111] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65" Namespace="calico-system" Pod="csi-node-driver-tjm8g" WorkloadEndpoint="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:49.046091 systemd-networkd[1800]: cali8c065d7234e: Gained IPv6LL Dec 13 01:30:49.136624 containerd[1956]: time="2024-12-13T01:30:49.135595657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:49.136624 containerd[1956]: time="2024-12-13T01:30:49.135715939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:49.136624 containerd[1956]: time="2024-12-13T01:30:49.135749061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:49.136624 containerd[1956]: time="2024-12-13T01:30:49.136551319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:49.262933 systemd[1]: Started cri-containerd-46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65.scope - libcontainer container 46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65. 
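The WorkloadEndpoint names that keep recurring in these entries, such as ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0, look like the node name, orchestrator, pod name and interface joined with single dashes, with the dashes inside each component doubled so the separators stay unambiguous. A minimal Go sketch of that naming convention as it appears in these logs (inferred from the log output, not Calico's implementation):

package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName joins node, orchestrator, pod and interface the way the
// names in the log appear to be formed: dashes inside each component are
// doubled, then the components are joined with single dashes.
func workloadEndpointName(node, orch, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), orch, esc(pod), iface}, "-")
}

func main() {
	fmt.Println(workloadEndpointName("ip-172-31-31-20", "k8s", "csi-node-driver-tjm8g", "eth0"))
	// ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0
}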
Dec 13 01:30:49.454986 kubelet[3141]: I1213 01:30:49.449680 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j9snz" podStartSLOduration=48.449656687 podStartE2EDuration="48.449656687s" podCreationTimestamp="2024-12-13 01:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:49.348786602 +0000 UTC m=+53.841444645" watchObservedRunningTime="2024-12-13 01:30:49.449656687 +0000 UTC m=+53.942314730" Dec 13 01:30:49.542080 systemd-networkd[1800]: cali6787942ab30: Gained IPv6LL Dec 13 01:30:49.602176 systemd-networkd[1800]: cali6936f694ae2: Link UP Dec 13 01:30:49.606446 systemd-networkd[1800]: cali6936f694ae2: Gained carrier Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:48.592 [INFO][5212] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0 calico-apiserver-6fb57f9dbd- calico-apiserver e810db5b-2666-45f5-b096-3468d8993a9c 789 0 2024-12-13 01:30:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fb57f9dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-20 calico-apiserver-6fb57f9dbd-9szhv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6936f694ae2 [] []}} ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:48.594 [INFO][5212] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:48.973 [INFO][5244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" HandleID="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.114 [INFO][5244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" HandleID="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-20", "pod":"calico-apiserver-6fb57f9dbd-9szhv", "timestamp":"2024-12-13 01:30:48.973660053 +0000 UTC"}, Hostname:"ip-172-31-31-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.114 [INFO][5244] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.115 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.115 [INFO][5244] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-20' Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.245 [INFO][5244] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.311 [INFO][5244] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.351 [INFO][5244] ipam/ipam.go 489: Trying affinity for 192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.360 [INFO][5244] ipam/ipam.go 155: Attempting to load block cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.392 [INFO][5244] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.392 [INFO][5244] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.408 [INFO][5244] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.475 [INFO][5244] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.549 [INFO][5244] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.55.134/26] block=192.168.55.128/26 handle="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.552 [INFO][5244] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.55.134/26] handle="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" host="ip-172-31-31-20" Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.553 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:49.699799 containerd[1956]: 2024-12-13 01:30:49.553 [INFO][5244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.55.134/26] IPv6=[] ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" HandleID="k8s-pod-network.73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.564 [INFO][5212] cni-plugin/k8s.go 386: Populated endpoint ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e810db5b-2666-45f5-b096-3468d8993a9c", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"", Pod:"calico-apiserver-6fb57f9dbd-9szhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6936f694ae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.564 [INFO][5212] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.55.134/32] ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.569 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6936f694ae2 ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.603 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.604 [INFO][5212] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e810db5b-2666-45f5-b096-3468d8993a9c", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb", Pod:"calico-apiserver-6fb57f9dbd-9szhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6936f694ae2", MAC:"32:df:ca:c5:43:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:49.701642 containerd[1956]: 2024-12-13 01:30:49.673 [INFO][5212] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb" Namespace="calico-apiserver" Pod="calico-apiserver-6fb57f9dbd-9szhv" WorkloadEndpoint="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:49.714453 containerd[1956]: time="2024-12-13T01:30:49.714330506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjm8g,Uid:38d9e318-7884-46ef-aa8d-69d6c11c0096,Namespace:calico-system,Attempt:1,} returns sandbox id \"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65\"" Dec 13 01:30:49.845097 containerd[1956]: time="2024-12-13T01:30:49.841391223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:49.845097 containerd[1956]: time="2024-12-13T01:30:49.841473666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:49.845097 containerd[1956]: time="2024-12-13T01:30:49.841498002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:49.845097 containerd[1956]: time="2024-12-13T01:30:49.841612336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:49.924936 systemd[1]: Started cri-containerd-73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb.scope - libcontainer container 73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb. 
Dec 13 01:30:50.184425 containerd[1956]: time="2024-12-13T01:30:50.183529388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fb57f9dbd-9szhv,Uid:e810db5b-2666-45f5-b096-3468d8993a9c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb\"" Dec 13 01:30:50.383104 sshd[5236]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:50.427046 systemd[1]: sshd@7-172.31.31.20:22-139.178.68.195:43632.service: Deactivated successfully. Dec 13 01:30:50.431687 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:30:50.437251 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:30:50.442220 systemd-logind[1938]: Removed session 8. Dec 13 01:30:50.694735 systemd-networkd[1800]: cali90f5dd138a6: Gained IPv6LL Dec 13 01:30:51.335161 systemd-networkd[1800]: cali6936f694ae2: Gained IPv6LL Dec 13 01:30:51.750238 systemd[1]: run-containerd-runc-k8s.io-96547c1ac5465a91baed4fc3d3b6387050cd9c868ba65ce6d964288bb77c367c-runc.DEKDmQ.mount: Deactivated successfully. Dec 13 01:30:52.260258 containerd[1956]: time="2024-12-13T01:30:52.260204634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:52.262416 containerd[1956]: time="2024-12-13T01:30:52.262357856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:30:52.265070 containerd[1956]: time="2024-12-13T01:30:52.264680867Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:52.268446 containerd[1956]: time="2024-12-13T01:30:52.268397953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:52.269383 containerd[1956]: time="2024-12-13T01:30:52.269343172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.23910372s" Dec 13 01:30:52.269551 containerd[1956]: time="2024-12-13T01:30:52.269529032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:30:52.270875 containerd[1956]: time="2024-12-13T01:30:52.270818723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:30:52.274220 containerd[1956]: time="2024-12-13T01:30:52.274162723Z" level=info msg="CreateContainer within sandbox \"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:52.300369 containerd[1956]: time="2024-12-13T01:30:52.300316454Z" level=info msg="CreateContainer within sandbox \"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a\"" Dec 13 
01:30:52.303874 containerd[1956]: time="2024-12-13T01:30:52.303563864Z" level=info msg="StartContainer for \"76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a\"" Dec 13 01:30:52.403349 systemd[1]: Started cri-containerd-76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a.scope - libcontainer container 76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a. Dec 13 01:30:52.472884 containerd[1956]: time="2024-12-13T01:30:52.472821758Z" level=info msg="StartContainer for \"76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a\" returns successfully" Dec 13 01:30:52.739163 systemd[1]: run-containerd-runc-k8s.io-76e3863cb20c26aca7ec48ff9f12a539f209d3b1698497e3dec3db022c95331a-runc.AcnqBd.mount: Deactivated successfully. Dec 13 01:30:53.414525 kubelet[3141]: I1213 01:30:53.414456 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-4vmnq" podStartSLOduration=33.169801768 podStartE2EDuration="38.414433008s" podCreationTimestamp="2024-12-13 01:30:15 +0000 UTC" firstStartedPulling="2024-12-13 01:30:47.026044721 +0000 UTC m=+51.518702753" lastFinishedPulling="2024-12-13 01:30:52.270675969 +0000 UTC m=+56.763333993" observedRunningTime="2024-12-13 01:30:53.413219446 +0000 UTC m=+57.905877490" watchObservedRunningTime="2024-12-13 01:30:53.414433008 +0000 UTC m=+57.907091044" Dec 13 01:30:53.533178 ntpd[1930]: Listen normally on 7 vxlan.calico 192.168.55.128:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 7 vxlan.calico 192.168.55.128:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 8 vxlan.calico [fe80::6434:8dff:fe5d:fd8d%4]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 9 calidef8caba816 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 10 calif708f463ef0 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 11 cali8c065d7234e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 12 cali6787942ab30 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 13 cali90f5dd138a6 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:30:53.534210 ntpd[1930]: 13 Dec 01:30:53 ntpd[1930]: Listen normally on 14 cali6936f694ae2 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:30:53.533303 ntpd[1930]: Listen normally on 8 vxlan.calico [fe80::6434:8dff:fe5d:fd8d%4]:123 Dec 13 01:30:53.533362 ntpd[1930]: Listen normally on 9 calidef8caba816 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:30:53.533404 ntpd[1930]: Listen normally on 10 calif708f463ef0 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:30:53.533441 ntpd[1930]: Listen normally on 11 cali8c065d7234e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:30:53.533480 ntpd[1930]: Listen normally on 12 cali6787942ab30 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:30:53.533523 ntpd[1930]: Listen normally on 13 cali90f5dd138a6 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:30:53.533562 ntpd[1930]: Listen normally on 14 cali6936f694ae2 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:30:54.395108 kubelet[3141]: I1213 01:30:54.395066 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:55.442307 systemd[1]: Started sshd@8-172.31.31.20:22-139.178.68.195:43640.service - OpenSSH 
per-connection server daemon (139.178.68.195:43640). Dec 13 01:30:55.721806 sshd[5468]: Accepted publickey for core from 139.178.68.195 port 43640 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:30:55.724930 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:55.741283 systemd-logind[1938]: New session 9 of user core. Dec 13 01:30:55.747225 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:30:55.863535 containerd[1956]: time="2024-12-13T01:30:55.862160261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:55.868544 containerd[1956]: time="2024-12-13T01:30:55.867651491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:30:55.870743 containerd[1956]: time="2024-12-13T01:30:55.870699677Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:55.884432 containerd[1956]: time="2024-12-13T01:30:55.884154324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:55.889029 containerd[1956]: time="2024-12-13T01:30:55.886147452Z" level=info msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" Dec 13 01:30:55.891349 containerd[1956]: time="2024-12-13T01:30:55.890419679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.619536479s" Dec 13 01:30:55.891349 containerd[1956]: time="2024-12-13T01:30:55.890472834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:30:55.934246 containerd[1956]: time="2024-12-13T01:30:55.934201790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:30:55.965322 containerd[1956]: time="2024-12-13T01:30:55.965275631Z" level=info msg="CreateContainer within sandbox \"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:30:56.021696 containerd[1956]: time="2024-12-13T01:30:56.021560177Z" level=info msg="CreateContainer within sandbox \"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"09ba9f5e3af8256573356886b9815a1a16de0e0a8d7e652849f23ded0dfb6c4f\"" Dec 13 01:30:56.022995 containerd[1956]: time="2024-12-13T01:30:56.022966900Z" level=info msg="StartContainer for \"09ba9f5e3af8256573356886b9815a1a16de0e0a8d7e652849f23ded0dfb6c4f\"" Dec 13 01:30:56.120620 systemd[1]: Started cri-containerd-09ba9f5e3af8256573356886b9815a1a16de0e0a8d7e652849f23ded0dfb6c4f.scope - libcontainer container 09ba9f5e3af8256573356886b9815a1a16de0e0a8d7e652849f23ded0dfb6c4f. 
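The kubelet pod_startup_latency_tracker entry above (for calico-apiserver-6fb57f9dbd-4vmnq) prints several related figures. They are consistent with podStartE2EDuration being the watch-observed running time minus the pod creation timestamp, and podStartSLOduration being that end-to-end time minus the time spent pulling images (the last few nanoseconds differ, presumably wall-clock versus monotonic rounding). A small Go sketch that re-derives the printed values; the exact accounting is internal to kubelet, this only checks the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the kubelet pod_startup_latency_tracker entry above.
	created := parse("2024-12-13 01:30:15 +0000 UTC")
	firstPull := parse("2024-12-13 01:30:47.026044721 +0000 UTC")
	lastPull := parse("2024-12-13 01:30:52.270675969 +0000 UTC")
	running := parse("2024-12-13 01:30:53.414433008 +0000 UTC")

	e2e := running.Sub(created)        // matches podStartE2EDuration=38.414433008s
	pulling := lastPull.Sub(firstPull) // time spent pulling images
	fmt.Println("e2e:", e2e)
	fmt.Println("pulling:", pulling)
	fmt.Println("e2e - pulling:", e2e-pulling) // ~ podStartSLOduration=33.169801768
}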
Dec 13 01:30:56.287348 containerd[1956]: time="2024-12-13T01:30:56.287102106Z" level=info msg="StartContainer for \"09ba9f5e3af8256573356886b9815a1a16de0e0a8d7e652849f23ded0dfb6c4f\" returns successfully" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.358 [WARNING][5494] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38d9e318-7884-46ef-aa8d-69d6c11c0096", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65", Pod:"csi-node-driver-tjm8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90f5dd138a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.359 [INFO][5494] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.359 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" iface="eth0" netns="" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.359 [INFO][5494] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.359 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.489 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.491 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.491 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.531 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.532 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.542 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:56.579197 containerd[1956]: 2024-12-13 01:30:56.562 [INFO][5494] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.579197 containerd[1956]: time="2024-12-13T01:30:56.578670545Z" level=info msg="TearDown network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" successfully" Dec 13 01:30:56.579197 containerd[1956]: time="2024-12-13T01:30:56.579107560Z" level=info msg="StopPodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" returns successfully" Dec 13 01:30:56.636042 containerd[1956]: time="2024-12-13T01:30:56.635009455Z" level=info msg="RemovePodSandbox for \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" Dec 13 01:30:56.636042 containerd[1956]: time="2024-12-13T01:30:56.635165226Z" level=info msg="Forcibly stopping sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\"" Dec 13 01:30:56.703193 sshd[5468]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:56.707899 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:30:56.708704 systemd[1]: sshd@8-172.31.31.20:22-139.178.68.195:43640.service: Deactivated successfully. Dec 13 01:30:56.715422 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:30:56.721833 systemd-logind[1938]: Removed session 9. Dec 13 01:30:56.795089 kubelet[3141]: I1213 01:30:56.794383 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57c86ff57f-7c9qb" podStartSLOduration=33.070280259 podStartE2EDuration="40.79435973s" podCreationTimestamp="2024-12-13 01:30:16 +0000 UTC" firstStartedPulling="2024-12-13 01:30:48.182101965 +0000 UTC m=+52.674760001" lastFinishedPulling="2024-12-13 01:30:55.906181448 +0000 UTC m=+60.398839472" observedRunningTime="2024-12-13 01:30:56.46491153 +0000 UTC m=+60.957569574" watchObservedRunningTime="2024-12-13 01:30:56.79435973 +0000 UTC m=+61.287017774" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.783 [WARNING][5573] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"38d9e318-7884-46ef-aa8d-69d6c11c0096", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65", Pod:"csi-node-driver-tjm8g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90f5dd138a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.783 [INFO][5573] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.783 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" iface="eth0" netns="" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.783 [INFO][5573] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.783 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.855 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.855 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.856 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.866 [WARNING][5586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.866 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" HandleID="k8s-pod-network.b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Workload="ip--172--31--31--20-k8s-csi--node--driver--tjm8g-eth0" Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.869 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:56.878213 containerd[1956]: 2024-12-13 01:30:56.872 [INFO][5573] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba" Dec 13 01:30:56.878213 containerd[1956]: time="2024-12-13T01:30:56.875499762Z" level=info msg="TearDown network for sandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" successfully" Dec 13 01:30:56.921973 containerd[1956]: time="2024-12-13T01:30:56.921912872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:56.963909 containerd[1956]: time="2024-12-13T01:30:56.963820693Z" level=info msg="RemovePodSandbox \"b62617f63c1f5ae13f1945f3db2a067af07dc896b798844d9b94b95a9239b8ba\" returns successfully" Dec 13 01:30:56.992701 containerd[1956]: time="2024-12-13T01:30:56.992475335Z" level=info msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.059 [WARNING][5604] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1318da38-1af2-423d-bab0-f7184d00175d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808", Pod:"coredns-6f6b679f8f-j9snz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6787942ab30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.059 [INFO][5604] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.059 [INFO][5604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" iface="eth0" netns="" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.059 [INFO][5604] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.059 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.091 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.091 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.091 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
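The coredns WorkloadEndpoint dumped just above lists its WorkloadEndpointPort entries with the port numbers in hex (Port:0x35, Port:0x23c1). Converted, those are the expected CoreDNS ports, as this trivial Go check shows:

package main

import "fmt"

func main() {
	// Port values as printed in the WorkloadEndpointPort dump above.
	fmt.Println("dns/dns-tcp:", 0x35) // 53
	fmt.Println("metrics:", 0x23c1)   // 9153, the CoreDNS Prometheus metrics port
}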
Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.097 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.097 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.099 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.103477 containerd[1956]: 2024-12-13 01:30:57.101 [INFO][5604] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.103477 containerd[1956]: time="2024-12-13T01:30:57.103333689Z" level=info msg="TearDown network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" successfully" Dec 13 01:30:57.103477 containerd[1956]: time="2024-12-13T01:30:57.103373108Z" level=info msg="StopPodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" returns successfully" Dec 13 01:30:57.104346 containerd[1956]: time="2024-12-13T01:30:57.104313382Z" level=info msg="RemovePodSandbox for \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" Dec 13 01:30:57.104433 containerd[1956]: time="2024-12-13T01:30:57.104352433Z" level=info msg="Forcibly stopping sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\"" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.149 [WARNING][5629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"1318da38-1af2-423d-bab0-f7184d00175d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"193dec694936b61e177acdf66a02c196faddd3bd52c44b44f0bc09a345220808", Pod:"coredns-6f6b679f8f-j9snz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6787942ab30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.149 [INFO][5629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.149 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" iface="eth0" netns="" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.149 [INFO][5629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.149 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.175 [INFO][5635] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.175 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.175 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.182 [WARNING][5635] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.182 [INFO][5635] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" HandleID="k8s-pod-network.4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--j9snz-eth0" Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.184 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.188139 containerd[1956]: 2024-12-13 01:30:57.186 [INFO][5629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577" Dec 13 01:30:57.188139 containerd[1956]: time="2024-12-13T01:30:57.188086567Z" level=info msg="TearDown network for sandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" successfully" Dec 13 01:30:57.194726 containerd[1956]: time="2024-12-13T01:30:57.194534755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:57.194726 containerd[1956]: time="2024-12-13T01:30:57.194610170Z" level=info msg="RemovePodSandbox \"4d2904961290749dbda9039d6da12f7ad0ecaf8fb204246801402ca631e5c577\" returns successfully" Dec 13 01:30:57.195204 containerd[1956]: time="2024-12-13T01:30:57.195173613Z" level=info msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.237 [WARNING][5654] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c9798d9-0918-4f04-b830-5e93da684068", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e", Pod:"coredns-6f6b679f8f-9ljzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef8caba816", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.237 [INFO][5654] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.237 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" iface="eth0" netns="" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.237 [INFO][5654] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.237 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.261 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.261 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.261 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.269 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.269 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.271 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.274770 containerd[1956]: 2024-12-13 01:30:57.273 [INFO][5654] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.275770 containerd[1956]: time="2024-12-13T01:30:57.274827271Z" level=info msg="TearDown network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" successfully" Dec 13 01:30:57.275770 containerd[1956]: time="2024-12-13T01:30:57.274895423Z" level=info msg="StopPodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" returns successfully" Dec 13 01:30:57.275770 containerd[1956]: time="2024-12-13T01:30:57.275537911Z" level=info msg="RemovePodSandbox for \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" Dec 13 01:30:57.275770 containerd[1956]: time="2024-12-13T01:30:57.275571098Z" level=info msg="Forcibly stopping sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\"" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.331 [WARNING][5678] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3c9798d9-0918-4f04-b830-5e93da684068", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"07ec5634423a4f496f0e344f399a5a54b77b736fcc130e161f443836bd37fc8e", Pod:"coredns-6f6b679f8f-9ljzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef8caba816", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.332 [INFO][5678] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.332 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" iface="eth0" netns="" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.332 [INFO][5678] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.332 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.363 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.363 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.363 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.372 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.372 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" HandleID="k8s-pod-network.a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Workload="ip--172--31--31--20-k8s-coredns--6f6b679f8f--9ljzf-eth0" Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.374 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.378465 containerd[1956]: 2024-12-13 01:30:57.376 [INFO][5678] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b" Dec 13 01:30:57.379748 containerd[1956]: time="2024-12-13T01:30:57.378509342Z" level=info msg="TearDown network for sandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" successfully" Dec 13 01:30:57.399927 containerd[1956]: time="2024-12-13T01:30:57.399872082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:57.400262 containerd[1956]: time="2024-12-13T01:30:57.399944889Z" level=info msg="RemovePodSandbox \"a08fd61ae5da2a34f88b90565ef18bcf8003849e01be7621285d4bcfe4abe94b\" returns successfully" Dec 13 01:30:57.400511 containerd[1956]: time="2024-12-13T01:30:57.400475741Z" level=info msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.453 [WARNING][5704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6789e551-cd4a-4631-b879-423157868f76", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03", Pod:"calico-apiserver-6fb57f9dbd-4vmnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif708f463ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.453 [INFO][5704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.453 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" iface="eth0" netns="" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.453 [INFO][5704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.453 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.480 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.480 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.480 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.491 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.491 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.495 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.499112 containerd[1956]: 2024-12-13 01:30:57.497 [INFO][5704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.500381 containerd[1956]: time="2024-12-13T01:30:57.499158791Z" level=info msg="TearDown network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" successfully" Dec 13 01:30:57.500381 containerd[1956]: time="2024-12-13T01:30:57.499188513Z" level=info msg="StopPodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" returns successfully" Dec 13 01:30:57.500381 containerd[1956]: time="2024-12-13T01:30:57.499879864Z" level=info msg="RemovePodSandbox for \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" Dec 13 01:30:57.500381 containerd[1956]: time="2024-12-13T01:30:57.499914790Z" level=info msg="Forcibly stopping sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\"" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.587 [WARNING][5729] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6789e551-cd4a-4631-b879-423157868f76", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"09edb1ef1131b8438a613a1e0258a71f9b641b003e1694d0e44b4fe0ee6d4c03", Pod:"calico-apiserver-6fb57f9dbd-4vmnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif708f463ef0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.589 [INFO][5729] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.589 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" iface="eth0" netns="" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.589 [INFO][5729] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.589 [INFO][5729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.619 [INFO][5735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.620 [INFO][5735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.620 [INFO][5735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.626 [WARNING][5735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.626 [INFO][5735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" HandleID="k8s-pod-network.873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--4vmnq-eth0" Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.628 [INFO][5735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.632670 containerd[1956]: 2024-12-13 01:30:57.630 [INFO][5729] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e" Dec 13 01:30:57.633390 containerd[1956]: time="2024-12-13T01:30:57.632711899Z" level=info msg="TearDown network for sandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" successfully" Dec 13 01:30:57.638626 containerd[1956]: time="2024-12-13T01:30:57.638458126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:57.638626 containerd[1956]: time="2024-12-13T01:30:57.638610812Z" level=info msg="RemovePodSandbox \"873558a6d702d97249fa217e19a1988a1390f250aee4c92d5f770a1d2589431e\" returns successfully" Dec 13 01:30:57.639270 containerd[1956]: time="2024-12-13T01:30:57.639235871Z" level=info msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.685 [WARNING][5753] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0", GenerateName:"calico-kube-controllers-57c86ff57f-", Namespace:"calico-system", SelfLink:"", UID:"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c86ff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43", Pod:"calico-kube-controllers-57c86ff57f-7c9qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c065d7234e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.685 [INFO][5753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.685 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" iface="eth0" netns="" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.685 [INFO][5753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.686 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.717 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.718 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.718 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.725 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.725 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.728 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.732724 containerd[1956]: 2024-12-13 01:30:57.730 [INFO][5753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.732724 containerd[1956]: time="2024-12-13T01:30:57.732606002Z" level=info msg="TearDown network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" successfully" Dec 13 01:30:57.732724 containerd[1956]: time="2024-12-13T01:30:57.732628983Z" level=info msg="StopPodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" returns successfully" Dec 13 01:30:57.734211 containerd[1956]: time="2024-12-13T01:30:57.733200154Z" level=info msg="RemovePodSandbox for \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" Dec 13 01:30:57.734211 containerd[1956]: time="2024-12-13T01:30:57.733233855Z" level=info msg="Forcibly stopping sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\"" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.778 [WARNING][5777] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0", GenerateName:"calico-kube-controllers-57c86ff57f-", Namespace:"calico-system", SelfLink:"", UID:"d3ab87f4-6e23-4e25-bdd7-87d4f2bfc4d4", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57c86ff57f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"68e7539875e90452f0bb44eb7ce44da162263107a490c1781651264f4dce5f43", Pod:"calico-kube-controllers-57c86ff57f-7c9qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c065d7234e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.778 [INFO][5777] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.778 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" iface="eth0" netns="" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.778 [INFO][5777] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.778 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.807 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.807 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.807 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.813 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.813 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" HandleID="k8s-pod-network.6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Workload="ip--172--31--31--20-k8s-calico--kube--controllers--57c86ff57f--7c9qb-eth0" Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.815 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.820449 containerd[1956]: 2024-12-13 01:30:57.817 [INFO][5777] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194" Dec 13 01:30:57.820449 containerd[1956]: time="2024-12-13T01:30:57.820421849Z" level=info msg="TearDown network for sandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" successfully" Dec 13 01:30:57.839656 containerd[1956]: time="2024-12-13T01:30:57.839539221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:57.840084 containerd[1956]: time="2024-12-13T01:30:57.839652158Z" level=info msg="RemovePodSandbox \"6d18f0f0b855a4997f454d04a8504dd5d0fe82e40fe180e414d3e76aff02d194\" returns successfully" Dec 13 01:30:57.840592 containerd[1956]: time="2024-12-13T01:30:57.840552579Z" level=info msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.895 [WARNING][5802] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e810db5b-2666-45f5-b096-3468d8993a9c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb", Pod:"calico-apiserver-6fb57f9dbd-9szhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6936f694ae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.896 [INFO][5802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.896 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" iface="eth0" netns="" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.896 [INFO][5802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.896 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.939 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.939 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.939 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.965 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.967 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.970 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:57.975283 containerd[1956]: 2024-12-13 01:30:57.973 [INFO][5802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:57.977100 containerd[1956]: time="2024-12-13T01:30:57.975322603Z" level=info msg="TearDown network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" successfully" Dec 13 01:30:57.977100 containerd[1956]: time="2024-12-13T01:30:57.975350839Z" level=info msg="StopPodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" returns successfully" Dec 13 01:30:57.977100 containerd[1956]: time="2024-12-13T01:30:57.976459571Z" level=info msg="RemovePodSandbox for \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" Dec 13 01:30:57.977100 containerd[1956]: time="2024-12-13T01:30:57.976497889Z" level=info msg="Forcibly stopping sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\"" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.035 [WARNING][5826] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0", GenerateName:"calico-apiserver-6fb57f9dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e810db5b-2666-45f5-b096-3468d8993a9c", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 30, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fb57f9dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-20", ContainerID:"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb", Pod:"calico-apiserver-6fb57f9dbd-9szhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6936f694ae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.036 [INFO][5826] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.036 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" iface="eth0" netns="" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.036 [INFO][5826] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.036 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.068 [INFO][5832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.068 [INFO][5832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.068 [INFO][5832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.077 [WARNING][5832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.077 [INFO][5832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" HandleID="k8s-pod-network.23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Workload="ip--172--31--31--20-k8s-calico--apiserver--6fb57f9dbd--9szhv-eth0" Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.079 [INFO][5832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:58.084961 containerd[1956]: 2024-12-13 01:30:58.081 [INFO][5826] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc" Dec 13 01:30:58.084961 containerd[1956]: time="2024-12-13T01:30:58.084911267Z" level=info msg="TearDown network for sandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" successfully" Dec 13 01:30:58.092994 containerd[1956]: time="2024-12-13T01:30:58.091573178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:58.092994 containerd[1956]: time="2024-12-13T01:30:58.091720479Z" level=info msg="RemovePodSandbox \"23cc170db8b9fa1f9e229c1fc20683d894e460f023587753f52368bf6e72c0bc\" returns successfully" Dec 13 01:30:58.431737 containerd[1956]: time="2024-12-13T01:30:58.431410798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:58.433973 containerd[1956]: time="2024-12-13T01:30:58.433890352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:30:58.438081 containerd[1956]: time="2024-12-13T01:30:58.438005066Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:58.441536 containerd[1956]: time="2024-12-13T01:30:58.441467353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:58.442409 containerd[1956]: time="2024-12-13T01:30:58.442165221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.507383938s" Dec 13 01:30:58.442409 containerd[1956]: time="2024-12-13T01:30:58.442211874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:30:58.443644 containerd[1956]: time="2024-12-13T01:30:58.443504559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:58.448236 containerd[1956]: 
time="2024-12-13T01:30:58.448194585Z" level=info msg="CreateContainer within sandbox \"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:30:58.476382 containerd[1956]: time="2024-12-13T01:30:58.476337742Z" level=info msg="CreateContainer within sandbox \"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"975e461ffee2552ae873d5d0fea096817ba595a57ec402c3434935219c54936b\"" Dec 13 01:30:58.477316 containerd[1956]: time="2024-12-13T01:30:58.477070095Z" level=info msg="StartContainer for \"975e461ffee2552ae873d5d0fea096817ba595a57ec402c3434935219c54936b\"" Dec 13 01:30:58.577292 systemd[1]: Started cri-containerd-975e461ffee2552ae873d5d0fea096817ba595a57ec402c3434935219c54936b.scope - libcontainer container 975e461ffee2552ae873d5d0fea096817ba595a57ec402c3434935219c54936b. Dec 13 01:30:58.630237 containerd[1956]: time="2024-12-13T01:30:58.630182564Z" level=info msg="StartContainer for \"975e461ffee2552ae873d5d0fea096817ba595a57ec402c3434935219c54936b\" returns successfully" Dec 13 01:30:58.821272 containerd[1956]: time="2024-12-13T01:30:58.821217168Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:58.823555 containerd[1956]: time="2024-12-13T01:30:58.823498734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:30:58.831179 containerd[1956]: time="2024-12-13T01:30:58.831130255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 387.588227ms" Dec 13 01:30:58.831695 containerd[1956]: time="2024-12-13T01:30:58.831190106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:30:58.834022 containerd[1956]: time="2024-12-13T01:30:58.832990968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:30:58.835986 containerd[1956]: time="2024-12-13T01:30:58.835950312Z" level=info msg="CreateContainer within sandbox \"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:58.866485 containerd[1956]: time="2024-12-13T01:30:58.866437144Z" level=info msg="CreateContainer within sandbox \"73cd84a9593ecff4183325c97444bccc5d9463460d0e50d9b2148df75bc19ecb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f5f01a66d00998273427478cca0c0994b2fc17cf3e56ac742e660caac49a7b5\"" Dec 13 01:30:58.867231 containerd[1956]: time="2024-12-13T01:30:58.867005169Z" level=info msg="StartContainer for \"7f5f01a66d00998273427478cca0c0994b2fc17cf3e56ac742e660caac49a7b5\"" Dec 13 01:30:58.920081 systemd[1]: Started cri-containerd-7f5f01a66d00998273427478cca0c0994b2fc17cf3e56ac742e660caac49a7b5.scope - libcontainer container 7f5f01a66d00998273427478cca0c0994b2fc17cf3e56ac742e660caac49a7b5. 
Dec 13 01:30:59.005462 containerd[1956]: time="2024-12-13T01:30:59.005409691Z" level=info msg="StartContainer for \"7f5f01a66d00998273427478cca0c0994b2fc17cf3e56ac742e660caac49a7b5\" returns successfully" Dec 13 01:30:59.486748 kubelet[3141]: I1213 01:30:59.486538 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fb57f9dbd-9szhv" podStartSLOduration=35.843942603 podStartE2EDuration="44.486514044s" podCreationTimestamp="2024-12-13 01:30:15 +0000 UTC" firstStartedPulling="2024-12-13 01:30:50.189827988 +0000 UTC m=+54.682486026" lastFinishedPulling="2024-12-13 01:30:58.832399439 +0000 UTC m=+63.325057467" observedRunningTime="2024-12-13 01:30:59.479701173 +0000 UTC m=+63.972359219" watchObservedRunningTime="2024-12-13 01:30:59.486514044 +0000 UTC m=+63.979172089" Dec 13 01:31:00.473853 kubelet[3141]: I1213 01:31:00.472748 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:31:00.813495 kubelet[3141]: I1213 01:31:00.810335 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:31:01.108814 containerd[1956]: time="2024-12-13T01:31:01.108476015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:01.123113 containerd[1956]: time="2024-12-13T01:31:01.123007283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:31:01.126077 containerd[1956]: time="2024-12-13T01:31:01.126025424Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:01.142928 containerd[1956]: time="2024-12-13T01:31:01.141471285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:01.143752 containerd[1956]: time="2024-12-13T01:31:01.143699201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.309786469s" Dec 13 01:31:01.143880 containerd[1956]: time="2024-12-13T01:31:01.143756864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:31:01.153285 containerd[1956]: time="2024-12-13T01:31:01.153232427Z" level=info msg="CreateContainer within sandbox \"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:31:01.223856 containerd[1956]: time="2024-12-13T01:31:01.223416759Z" level=info msg="CreateContainer within sandbox \"46c680d582bb90be085ff271f67541eb279208850b80a9eb29e7dd8c5283ae65\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dde6a2cb30aa87d1e3d7a127e0b923b81c15899d050a5e9d0da0a852d2f15747\"" Dec 13 01:31:01.227258 containerd[1956]: 
time="2024-12-13T01:31:01.225626296Z" level=info msg="StartContainer for \"dde6a2cb30aa87d1e3d7a127e0b923b81c15899d050a5e9d0da0a852d2f15747\"" Dec 13 01:31:01.576114 systemd[1]: Started cri-containerd-dde6a2cb30aa87d1e3d7a127e0b923b81c15899d050a5e9d0da0a852d2f15747.scope - libcontainer container dde6a2cb30aa87d1e3d7a127e0b923b81c15899d050a5e9d0da0a852d2f15747. Dec 13 01:31:01.834658 systemd[1]: Started sshd@9-172.31.31.20:22-139.178.68.195:48150.service - OpenSSH per-connection server daemon (139.178.68.195:48150). Dec 13 01:31:02.372450 containerd[1956]: time="2024-12-13T01:31:02.372389885Z" level=info msg="StartContainer for \"dde6a2cb30aa87d1e3d7a127e0b923b81c15899d050a5e9d0da0a852d2f15747\" returns successfully" Dec 13 01:31:02.689579 sshd[5965]: Accepted publickey for core from 139.178.68.195 port 48150 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:02.717579 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:02.758072 systemd-logind[1938]: New session 10 of user core. Dec 13 01:31:02.762465 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:31:03.863232 kubelet[3141]: I1213 01:31:03.862715 3141 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:31:03.890798 kubelet[3141]: I1213 01:31:03.890675 3141 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:31:04.031550 sshd[5965]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:04.043769 systemd[1]: sshd@9-172.31.31.20:22-139.178.68.195:48150.service: Deactivated successfully. Dec 13 01:31:04.048652 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:31:04.049996 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:31:04.071606 systemd[1]: Started sshd@10-172.31.31.20:22-139.178.68.195:48166.service - OpenSSH per-connection server daemon (139.178.68.195:48166). Dec 13 01:31:04.085625 systemd-logind[1938]: Removed session 10. Dec 13 01:31:04.252895 sshd[5994]: Accepted publickey for core from 139.178.68.195 port 48166 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:04.253960 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:04.270706 systemd-logind[1938]: New session 11 of user core. Dec 13 01:31:04.283096 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:31:04.712144 sshd[5994]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:04.721664 systemd[1]: sshd@10-172.31.31.20:22-139.178.68.195:48166.service: Deactivated successfully. Dec 13 01:31:04.727334 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:31:04.732647 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:31:04.757127 systemd[1]: Started sshd@11-172.31.31.20:22-139.178.68.195:48170.service - OpenSSH per-connection server daemon (139.178.68.195:48170). Dec 13 01:31:04.760898 systemd-logind[1938]: Removed session 11. 
Dec 13 01:31:04.980384 sshd[6014]: Accepted publickey for core from 139.178.68.195 port 48170 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:04.984363 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:04.997176 systemd-logind[1938]: New session 12 of user core. Dec 13 01:31:05.024970 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:31:05.586974 sshd[6014]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:05.597407 systemd[1]: sshd@11-172.31.31.20:22-139.178.68.195:48170.service: Deactivated successfully. Dec 13 01:31:05.603301 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:31:05.605102 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:31:05.608612 systemd-logind[1938]: Removed session 12. Dec 13 01:31:08.543811 kubelet[3141]: I1213 01:31:08.543302 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:31:08.583824 kubelet[3141]: I1213 01:31:08.583650 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tjm8g" podStartSLOduration=41.161050438 podStartE2EDuration="52.583581747s" podCreationTimestamp="2024-12-13 01:30:16 +0000 UTC" firstStartedPulling="2024-12-13 01:30:49.723986373 +0000 UTC m=+54.216644406" lastFinishedPulling="2024-12-13 01:31:01.146517686 +0000 UTC m=+65.639175715" observedRunningTime="2024-12-13 01:31:02.846141698 +0000 UTC m=+67.338799772" watchObservedRunningTime="2024-12-13 01:31:08.583581747 +0000 UTC m=+73.076239796" Dec 13 01:31:10.637402 systemd[1]: Started sshd@12-172.31.31.20:22-139.178.68.195:33946.service - OpenSSH per-connection server daemon (139.178.68.195:33946). Dec 13 01:31:10.825110 sshd[6031]: Accepted publickey for core from 139.178.68.195 port 33946 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:10.827180 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:10.832711 systemd-logind[1938]: New session 13 of user core. Dec 13 01:31:10.839098 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:31:11.368509 sshd[6031]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:11.386807 systemd[1]: sshd@12-172.31.31.20:22-139.178.68.195:33946.service: Deactivated successfully. Dec 13 01:31:11.395353 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:31:11.398619 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:31:11.403122 systemd-logind[1938]: Removed session 13. Dec 13 01:31:16.411674 systemd[1]: Started sshd@13-172.31.31.20:22-139.178.68.195:47758.service - OpenSSH per-connection server daemon (139.178.68.195:47758). Dec 13 01:31:16.701697 sshd[6047]: Accepted publickey for core from 139.178.68.195 port 47758 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:16.706484 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:16.715184 systemd-logind[1938]: New session 14 of user core. Dec 13 01:31:16.720242 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:31:17.660921 sshd[6047]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:17.668994 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:31:17.670579 systemd[1]: sshd@13-172.31.31.20:22-139.178.68.195:47758.service: Deactivated successfully. 
Dec 13 01:31:17.675822 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:31:17.687612 systemd-logind[1938]: Removed session 14. Dec 13 01:31:22.704265 systemd[1]: Started sshd@14-172.31.31.20:22-139.178.68.195:47772.service - OpenSSH per-connection server daemon (139.178.68.195:47772). Dec 13 01:31:22.918942 sshd[6084]: Accepted publickey for core from 139.178.68.195 port 47772 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:22.925429 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:22.941037 systemd-logind[1938]: New session 15 of user core. Dec 13 01:31:22.952150 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:31:23.472724 sshd[6084]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:23.486048 systemd[1]: sshd@14-172.31.31.20:22-139.178.68.195:47772.service: Deactivated successfully. Dec 13 01:31:23.496694 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:31:23.508565 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:31:23.510540 systemd-logind[1938]: Removed session 15. Dec 13 01:31:28.509225 systemd[1]: Started sshd@15-172.31.31.20:22-139.178.68.195:34714.service - OpenSSH per-connection server daemon (139.178.68.195:34714). Dec 13 01:31:28.716620 sshd[6122]: Accepted publickey for core from 139.178.68.195 port 34714 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:28.719162 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:28.726765 systemd-logind[1938]: New session 16 of user core. Dec 13 01:31:28.737115 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:31:29.614502 sshd[6122]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:29.625988 systemd[1]: sshd@15-172.31.31.20:22-139.178.68.195:34714.service: Deactivated successfully. Dec 13 01:31:29.630783 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:31:29.635723 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:31:29.658278 systemd[1]: Started sshd@16-172.31.31.20:22-139.178.68.195:34728.service - OpenSSH per-connection server daemon (139.178.68.195:34728). Dec 13 01:31:29.660652 systemd-logind[1938]: Removed session 16. Dec 13 01:31:29.856974 sshd[6135]: Accepted publickey for core from 139.178.68.195 port 34728 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:31:29.860636 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:29.866605 systemd-logind[1938]: New session 17 of user core. Dec 13 01:31:29.874176 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:31:30.751676 sshd[6135]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:30.757125 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:31:30.758645 systemd[1]: sshd@16-172.31.31.20:22-139.178.68.195:34728.service: Deactivated successfully. Dec 13 01:31:30.761610 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:31:30.764738 systemd-logind[1938]: Removed session 17. Dec 13 01:31:30.786232 systemd[1]: Started sshd@17-172.31.31.20:22-139.178.68.195:34736.service - OpenSSH per-connection server daemon (139.178.68.195:34736). 
Dec 13 01:31:30.961472 sshd[6167]: Accepted publickey for core from 139.178.68.195 port 34736 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:30.964510 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:30.978466 systemd-logind[1938]: New session 18 of user core.
Dec 13 01:31:30.983723 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:31:34.929290 sshd[6167]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:34.943025 systemd[1]: sshd@17-172.31.31.20:22-139.178.68.195:34736.service: Deactivated successfully.
Dec 13 01:31:34.945270 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:31:34.948733 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:31:34.990348 systemd[1]: Started sshd@18-172.31.31.20:22-139.178.68.195:34746.service - OpenSSH per-connection server daemon (139.178.68.195:34746).
Dec 13 01:31:34.994233 systemd-logind[1938]: Removed session 18.
Dec 13 01:31:35.336676 sshd[6186]: Accepted publickey for core from 139.178.68.195 port 34746 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:35.344668 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:35.353633 systemd-logind[1938]: New session 19 of user core.
Dec 13 01:31:35.361102 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:31:37.161950 sshd[6186]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:37.177108 systemd[1]: sshd@18-172.31.31.20:22-139.178.68.195:34746.service: Deactivated successfully.
Dec 13 01:31:37.178069 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:31:37.182561 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:31:37.212515 systemd[1]: Started sshd@19-172.31.31.20:22-139.178.68.195:46098.service - OpenSSH per-connection server daemon (139.178.68.195:46098).
Dec 13 01:31:37.214418 systemd-logind[1938]: Removed session 19.
Dec 13 01:31:37.473788 sshd[6198]: Accepted publickey for core from 139.178.68.195 port 46098 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:37.479576 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:37.516278 systemd-logind[1938]: New session 20 of user core.
Dec 13 01:31:37.524594 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:31:37.817368 sshd[6198]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:37.845352 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:31:37.846456 systemd[1]: sshd@19-172.31.31.20:22-139.178.68.195:46098.service: Deactivated successfully.
Dec 13 01:31:37.851239 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:31:37.853061 systemd-logind[1938]: Removed session 20.
Dec 13 01:31:42.856254 systemd[1]: Started sshd@20-172.31.31.20:22-139.178.68.195:46100.service - OpenSSH per-connection server daemon (139.178.68.195:46100).
Dec 13 01:31:43.012232 sshd[6211]: Accepted publickey for core from 139.178.68.195 port 46100 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:43.014195 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:43.019081 systemd-logind[1938]: New session 21 of user core.
Dec 13 01:31:43.025077 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:31:43.228110 sshd[6211]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:43.239059 systemd[1]: sshd@20-172.31.31.20:22-139.178.68.195:46100.service: Deactivated successfully.
Dec 13 01:31:43.241808 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:31:43.243467 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:31:43.244766 systemd-logind[1938]: Removed session 21.
Dec 13 01:31:48.264551 systemd[1]: Started sshd@21-172.31.31.20:22-139.178.68.195:36148.service - OpenSSH per-connection server daemon (139.178.68.195:36148).
Dec 13 01:31:48.459257 sshd[6227]: Accepted publickey for core from 139.178.68.195 port 36148 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:48.461961 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:48.472456 systemd-logind[1938]: New session 22 of user core.
Dec 13 01:31:48.482056 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:31:48.733056 sshd[6227]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:48.741764 systemd[1]: sshd@21-172.31.31.20:22-139.178.68.195:36148.service: Deactivated successfully.
Dec 13 01:31:48.745655 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:31:48.747329 systemd-logind[1938]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:31:48.750493 systemd-logind[1938]: Removed session 22.
Dec 13 01:31:53.766424 systemd[1]: Started sshd@22-172.31.31.20:22-139.178.68.195:36154.service - OpenSSH per-connection server daemon (139.178.68.195:36154).
Dec 13 01:31:53.958408 sshd[6262]: Accepted publickey for core from 139.178.68.195 port 36154 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:53.960327 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:53.965192 systemd-logind[1938]: New session 23 of user core.
Dec 13 01:31:53.972072 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:31:54.183347 sshd[6262]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:54.189300 systemd[1]: sshd@22-172.31.31.20:22-139.178.68.195:36154.service: Deactivated successfully.
Dec 13 01:31:54.192213 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:31:54.193293 systemd-logind[1938]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:31:54.194789 systemd-logind[1938]: Removed session 23.
Dec 13 01:31:59.238746 systemd[1]: Started sshd@23-172.31.31.20:22-139.178.68.195:37286.service - OpenSSH per-connection server daemon (139.178.68.195:37286).
Dec 13 01:31:59.448008 sshd[6277]: Accepted publickey for core from 139.178.68.195 port 37286 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:59.450464 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:59.456933 systemd-logind[1938]: New session 24 of user core.
Dec 13 01:31:59.466251 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:31:59.691282 sshd[6277]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:59.699307 systemd[1]: sshd@23-172.31.31.20:22-139.178.68.195:37286.service: Deactivated successfully.
Dec 13 01:31:59.703465 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:31:59.704788 systemd-logind[1938]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:31:59.706225 systemd-logind[1938]: Removed session 24.
Dec 13 01:32:04.751790 systemd[1]: Started sshd@24-172.31.31.20:22-139.178.68.195:37292.service - OpenSSH per-connection server daemon (139.178.68.195:37292).
Dec 13 01:32:04.942174 sshd[6317]: Accepted publickey for core from 139.178.68.195 port 37292 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:04.952103 sshd[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:04.966678 systemd-logind[1938]: New session 25 of user core.
Dec 13 01:32:04.975096 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:32:05.210610 sshd[6317]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:05.216717 systemd[1]: sshd@24-172.31.31.20:22-139.178.68.195:37292.service: Deactivated successfully.
Dec 13 01:32:05.217106 systemd-logind[1938]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:32:05.222678 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:32:05.225315 systemd-logind[1938]: Removed session 25.
Dec 13 01:32:10.253331 systemd[1]: Started sshd@25-172.31.31.20:22-139.178.68.195:54258.service - OpenSSH per-connection server daemon (139.178.68.195:54258).
Dec 13 01:32:10.439180 sshd[6329]: Accepted publickey for core from 139.178.68.195 port 54258 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:10.441127 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:10.465800 systemd-logind[1938]: New session 26 of user core.
Dec 13 01:32:10.472287 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:32:10.723131 sshd[6329]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:10.730397 systemd[1]: sshd@25-172.31.31.20:22-139.178.68.195:54258.service: Deactivated successfully.
Dec 13 01:32:10.735815 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:32:10.739303 systemd-logind[1938]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:32:10.742387 systemd-logind[1938]: Removed session 26.
Dec 13 01:32:26.524508 systemd[1]: cri-containerd-60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac.scope: Deactivated successfully.
Dec 13 01:32:26.528627 systemd[1]: cri-containerd-60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac.scope: Consumed 3.840s CPU time.
Dec 13 01:32:26.782271 systemd[1]: cri-containerd-536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c.scope: Deactivated successfully.
Dec 13 01:32:26.782914 systemd[1]: cri-containerd-536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c.scope: Consumed 3.971s CPU time, 17.9M memory peak, 0B memory swap peak.
Dec 13 01:32:26.824759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac-rootfs.mount: Deactivated successfully.
Dec 13 01:32:26.865011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c-rootfs.mount: Deactivated successfully.
Dec 13 01:32:26.908430 containerd[1956]: time="2024-12-13T01:32:26.870820164Z" level=info msg="shim disconnected" id=60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac namespace=k8s.io
Dec 13 01:32:26.909090 containerd[1956]: time="2024-12-13T01:32:26.870488023Z" level=info msg="shim disconnected" id=536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c namespace=k8s.io
Dec 13 01:32:26.927205 containerd[1956]: time="2024-12-13T01:32:26.927133175Z" level=warning msg="cleaning up after shim disconnected" id=536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c namespace=k8s.io
Dec 13 01:32:26.927205 containerd[1956]: time="2024-12-13T01:32:26.927186630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:26.930150 containerd[1956]: time="2024-12-13T01:32:26.927133204Z" level=warning msg="cleaning up after shim disconnected" id=60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac namespace=k8s.io
Dec 13 01:32:26.930150 containerd[1956]: time="2024-12-13T01:32:26.927664409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:27.080022 containerd[1956]: time="2024-12-13T01:32:27.079780732Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:32:27.315216 kubelet[3141]: I1213 01:32:27.315156 3141 scope.go:117] "RemoveContainer" containerID="60783935829f2fd2a4bfa85fae21b136c6c6c967b299790bf558169569a24cac"
Dec 13 01:32:27.316668 kubelet[3141]: I1213 01:32:27.315551 3141 scope.go:117] "RemoveContainer" containerID="536e9b7e36fadc641dd0d5e5cac00b8d4d0143659722497fc49de530126ba54c"
Dec 13 01:32:27.357598 containerd[1956]: time="2024-12-13T01:32:27.356591609Z" level=info msg="CreateContainer within sandbox \"62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 01:32:27.358866 containerd[1956]: time="2024-12-13T01:32:27.358506375Z" level=info msg="CreateContainer within sandbox \"6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:32:27.468630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261972778.mount: Deactivated successfully.
Dec 13 01:32:27.482998 containerd[1956]: time="2024-12-13T01:32:27.482944641Z" level=info msg="CreateContainer within sandbox \"62a90aff388d57c5e6520d807889f276faeeb1ccc1f37d8f51802a0cfed7f7f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f8212202cc7383547ed73a8da9801cf15438b7bf94481e0e4739524483c2b2f8\""
Dec 13 01:32:27.486218 containerd[1956]: time="2024-12-13T01:32:27.485757240Z" level=info msg="StartContainer for \"f8212202cc7383547ed73a8da9801cf15438b7bf94481e0e4739524483c2b2f8\""
Dec 13 01:32:27.500809 containerd[1956]: time="2024-12-13T01:32:27.500757763Z" level=info msg="CreateContainer within sandbox \"6d4fe0f21a5e3180bf711669ca94830baf77c846c1e3276888e6d64d1a2bc2ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9e998dc8dd7580aeb664c5695f8c0e7c5dd43e2ac4c87a29151176c4c2b16189\""
Dec 13 01:32:27.501858 containerd[1956]: time="2024-12-13T01:32:27.501549935Z" level=info msg="StartContainer for \"9e998dc8dd7580aeb664c5695f8c0e7c5dd43e2ac4c87a29151176c4c2b16189\""
Dec 13 01:32:27.561737 systemd[1]: Started cri-containerd-9e998dc8dd7580aeb664c5695f8c0e7c5dd43e2ac4c87a29151176c4c2b16189.scope - libcontainer container 9e998dc8dd7580aeb664c5695f8c0e7c5dd43e2ac4c87a29151176c4c2b16189.
Dec 13 01:32:27.579004 systemd[1]: Started cri-containerd-f8212202cc7383547ed73a8da9801cf15438b7bf94481e0e4739524483c2b2f8.scope - libcontainer container f8212202cc7383547ed73a8da9801cf15438b7bf94481e0e4739524483c2b2f8.
Dec 13 01:32:27.675728 containerd[1956]: time="2024-12-13T01:32:27.674571356Z" level=info msg="StartContainer for \"9e998dc8dd7580aeb664c5695f8c0e7c5dd43e2ac4c87a29151176c4c2b16189\" returns successfully"
Dec 13 01:32:27.687079 containerd[1956]: time="2024-12-13T01:32:27.687016465Z" level=info msg="StartContainer for \"f8212202cc7383547ed73a8da9801cf15438b7bf94481e0e4739524483c2b2f8\" returns successfully"
Dec 13 01:32:27.824966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254324251.mount: Deactivated successfully.
Dec 13 01:32:28.824205 kubelet[3141]: E1213 01:32:28.824144 3141 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-20?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:32:30.835099 systemd[1]: cri-containerd-723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b.scope: Deactivated successfully.
Dec 13 01:32:30.835422 systemd[1]: cri-containerd-723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b.scope: Consumed 1.722s CPU time, 18.4M memory peak, 0B memory swap peak.
Dec 13 01:32:30.893898 containerd[1956]: time="2024-12-13T01:32:30.892590322Z" level=info msg="shim disconnected" id=723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b namespace=k8s.io
Dec 13 01:32:30.893898 containerd[1956]: time="2024-12-13T01:32:30.893081660Z" level=warning msg="cleaning up after shim disconnected" id=723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b namespace=k8s.io
Dec 13 01:32:30.893898 containerd[1956]: time="2024-12-13T01:32:30.893101225Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:32:30.905168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b-rootfs.mount: Deactivated successfully.
Dec 13 01:32:31.419597 kubelet[3141]: I1213 01:32:31.419321 3141 scope.go:117] "RemoveContainer" containerID="723cb1477de6e07b2b2187250c79cc3e930504c15a93dbffac68dff4e4f4248b"
Dec 13 01:32:31.425722 containerd[1956]: time="2024-12-13T01:32:31.425370310Z" level=info msg="CreateContainer within sandbox \"44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:32:31.464501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377402484.mount: Deactivated successfully.
Dec 13 01:32:31.496572 containerd[1956]: time="2024-12-13T01:32:31.496513547Z" level=info msg="CreateContainer within sandbox \"44c26419c3fd85441256a05e3284b3a3bab9be8983fc0752a7836256135095fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e2d7c69c2b77b83992c47300d3a3c2e8961f7a83b169a18fdd3e5fe4b1a128fb\""
Dec 13 01:32:31.497433 containerd[1956]: time="2024-12-13T01:32:31.497330171Z" level=info msg="StartContainer for \"e2d7c69c2b77b83992c47300d3a3c2e8961f7a83b169a18fdd3e5fe4b1a128fb\""
Dec 13 01:32:31.555089 systemd[1]: Started cri-containerd-e2d7c69c2b77b83992c47300d3a3c2e8961f7a83b169a18fdd3e5fe4b1a128fb.scope - libcontainer container e2d7c69c2b77b83992c47300d3a3c2e8961f7a83b169a18fdd3e5fe4b1a128fb.
Dec 13 01:32:31.700449 containerd[1956]: time="2024-12-13T01:32:31.700077582Z" level=info msg="StartContainer for \"e2d7c69c2b77b83992c47300d3a3c2e8961f7a83b169a18fdd3e5fe4b1a128fb\" returns successfully"
Dec 13 01:32:38.825702 kubelet[3141]: E1213 01:32:38.825624 3141 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-31-20)"