Dec 13 01:32:17.008836 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:32:17.008876 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:32:17.008891 kernel: BIOS-provided physical RAM map: Dec 13 01:32:17.008902 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:32:17.008913 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:32:17.008925 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:32:17.008943 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 01:32:17.008955 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 01:32:17.008968 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 01:32:17.008981 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:32:17.009011 kernel: NX (Execute Disable) protection: active Dec 13 01:32:17.009021 kernel: APIC: Static calls initialized Dec 13 01:32:17.009031 kernel: SMBIOS 2.7 present. Dec 13 01:32:17.009071 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 01:32:17.009088 kernel: Hypervisor detected: KVM Dec 13 01:32:17.009124 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:32:17.009137 kernel: kvm-clock: using sched offset of 6250328583 cycles Dec 13 01:32:17.009151 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:32:17.009190 kernel: tsc: Detected 2499.996 MHz processor Dec 13 01:32:17.009203 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:32:17.009216 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:32:17.009233 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 01:32:17.009247 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:32:17.009261 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:32:17.009273 kernel: Using GB pages for direct mapping Dec 13 01:32:17.009286 kernel: ACPI: Early table checksum verification disabled Dec 13 01:32:17.009298 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 01:32:17.009310 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 01:32:17.009322 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:32:17.009335 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 01:32:17.009350 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 01:32:17.009363 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:32:17.009375 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:32:17.009388 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 01:32:17.009400 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:32:17.009412 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 01:32:17.009425 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 01:32:17.009438 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 01:32:17.009450 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 01:32:17.009465 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 01:32:17.009483 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 01:32:17.009496 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 01:32:17.009509 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 01:32:17.009521 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 01:32:17.009537 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 01:32:17.009549 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 01:32:17.009562 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 01:32:17.009575 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 01:32:17.009589 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:32:17.009601 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:32:17.009614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 01:32:17.009627 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 01:32:17.009639 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 01:32:17.009655 kernel: Zone ranges: Dec 13 01:32:17.009669 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:32:17.009681 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 01:32:17.009695 kernel: Normal empty Dec 13 01:32:17.009708 kernel: Movable zone start for each node Dec 13 01:32:17.009721 kernel: Early memory node ranges Dec 13 01:32:17.009734 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:32:17.009746 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 01:32:17.009759 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 01:32:17.009772 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:32:17.009789 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:32:17.009803 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 01:32:17.009815 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:32:17.009828 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:32:17.009841 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 01:32:17.009854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:32:17.009868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:32:17.009880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:32:17.009893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:32:17.009910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:32:17.009922 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:32:17.009935 kernel: TSC deadline timer available Dec 13 01:32:17.009948 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:32:17.009961 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:32:17.009974 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 01:32:17.010009 kernel: Booting paravirtualized kernel on KVM Dec 13 01:32:17.010023 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:32:17.010037 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:32:17.010052 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:32:17.010072 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:32:17.010085 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:32:17.010097 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:32:17.010111 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:32:17.010127 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:32:17.010143 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:32:17.010156 kernel: random: crng init done Dec 13 01:32:17.010173 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:32:17.010186 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:32:17.010199 kernel: Fallback order for Node 0: 0 Dec 13 01:32:17.010212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 01:32:17.010226 kernel: Policy zone: DMA32 Dec 13 01:32:17.010239 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:32:17.010254 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Dec 13 01:32:17.010269 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:32:17.010283 kernel: Kernel/User page tables isolation: enabled Dec 13 01:32:17.010302 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:32:17.010318 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:32:17.010334 kernel: Dynamic Preempt: voluntary Dec 13 01:32:17.010349 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:32:17.010364 kernel: rcu: RCU event tracing is enabled. Dec 13 01:32:17.010380 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:32:17.010395 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:32:17.010411 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:32:17.010427 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:32:17.010445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:32:17.010461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:32:17.010477 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:32:17.010492 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 13 01:32:17.010508 kernel: Console: colour VGA+ 80x25 Dec 13 01:32:17.010523 kernel: printk: console [ttyS0] enabled Dec 13 01:32:17.010539 kernel: ACPI: Core revision 20230628 Dec 13 01:32:17.010554 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 01:32:17.010570 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:32:17.010588 kernel: x2apic enabled Dec 13 01:32:17.010604 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:32:17.010630 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 01:32:17.010650 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Dec 13 01:32:17.010667 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:32:17.010684 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:32:17.010699 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:32:17.010716 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:32:17.010732 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:32:17.010749 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:32:17.010765 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 01:32:17.010782 kernel: RETBleed: Vulnerable Dec 13 01:32:17.010798 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:32:17.010818 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:32:17.010834 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:32:17.010851 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:32:17.010867 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:32:17.010883 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:32:17.010900 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:32:17.010919 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 01:32:17.010936 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 01:32:17.010952 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:32:17.010968 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:32:17.010985 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:32:17.011016 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 01:32:17.011030 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:32:17.011044 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 01:32:17.011057 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 01:32:17.011070 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 01:32:17.011083 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 01:32:17.011099 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 01:32:17.011112 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 01:32:17.011126 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 01:32:17.011139 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:32:17.011153 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:32:17.011166 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:32:17.011179 kernel: landlock: Up and running. Dec 13 01:32:17.011194 kernel: SELinux: Initializing. Dec 13 01:32:17.011210 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:32:17.011226 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:32:17.011243 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:32:17.011264 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:32:17.011281 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:32:17.011298 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:32:17.011315 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:32:17.011332 kernel: signal: max sigframe size: 3632 Dec 13 01:32:17.011348 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:32:17.011365 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:32:17.011382 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:32:17.011399 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:32:17.011419 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:32:17.011435 kernel: .... node #0, CPUs: #1 Dec 13 01:32:17.011452 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:32:17.011470 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:32:17.011487 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:32:17.011504 kernel: smpboot: Max logical packages: 1 Dec 13 01:32:17.011521 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Dec 13 01:32:17.011537 kernel: devtmpfs: initialized Dec 13 01:32:17.011553 kernel: x86/mm: Memory block size: 128MB Dec 13 01:32:17.011573 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:32:17.011590 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:32:17.011607 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:32:17.011624 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:32:17.011640 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:32:17.011657 kernel: audit: type=2000 audit(1734053536.697:1): state=initialized audit_enabled=0 res=1 Dec 13 01:32:17.011674 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:32:17.011691 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:32:17.011711 kernel: cpuidle: using governor menu Dec 13 01:32:17.011728 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:32:17.011745 kernel: dca service started, version 1.12.1 Dec 13 01:32:17.011761 kernel: PCI: Using configuration type 1 for base access Dec 13 01:32:17.011778 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:32:17.011795 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:32:17.011812 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:32:17.011829 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:32:17.011846 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:32:17.011867 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:32:17.011884 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:32:17.011901 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:32:17.011918 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:32:17.011935 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:32:17.011952 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:32:17.011968 kernel: ACPI: Interpreter enabled Dec 13 01:32:17.011985 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:32:17.012031 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:32:17.012045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:32:17.012063 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:32:17.012077 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:32:17.012091 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:32:17.012330 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:32:17.012474 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:32:17.012608 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:32:17.012628 kernel: acpiphp: Slot [3] registered Dec 13 01:32:17.012647 kernel: acpiphp: Slot [4] registered Dec 13 01:32:17.012662 kernel: acpiphp: Slot [5] registered Dec 13 01:32:17.012678 kernel: acpiphp: Slot [6] registered Dec 13 01:32:17.012693 kernel: acpiphp: Slot [7] registered Dec 13 01:32:17.012707 kernel: acpiphp: Slot [8] registered Dec 13 01:32:17.012723 kernel: acpiphp: Slot [9] registered Dec 13 01:32:17.012737 kernel: acpiphp: Slot [10] registered Dec 13 01:32:17.012753 kernel: acpiphp: Slot [11] registered Dec 13 01:32:17.012767 kernel: acpiphp: Slot [12] registered Dec 13 01:32:17.012787 kernel: acpiphp: Slot [13] registered Dec 13 01:32:17.012801 kernel: acpiphp: Slot [14] registered Dec 13 01:32:17.012815 kernel: acpiphp: Slot [15] registered Dec 13 01:32:17.012830 kernel: acpiphp: Slot [16] registered Dec 13 01:32:17.012846 kernel: acpiphp: Slot [17] registered Dec 13 01:32:17.012861 kernel: acpiphp: Slot [18] registered Dec 13 01:32:17.012876 kernel: acpiphp: Slot [19] registered Dec 13 01:32:17.012891 kernel: acpiphp: Slot [20] registered Dec 13 01:32:17.012905 kernel: acpiphp: Slot [21] registered Dec 13 01:32:17.012920 kernel: acpiphp: Slot [22] registered Dec 13 01:32:17.012939 kernel: acpiphp: Slot [23] registered Dec 13 01:32:17.012954 kernel: acpiphp: Slot [24] registered Dec 13 01:32:17.012968 kernel: acpiphp: Slot [25] registered Dec 13 01:32:17.012982 kernel: acpiphp: Slot [26] registered Dec 13 01:32:17.013015 kernel: acpiphp: Slot [27] registered Dec 13 01:32:17.013030 kernel: acpiphp: Slot [28] registered Dec 13 01:32:17.013043 kernel: acpiphp: Slot [29] registered Dec 13 01:32:17.013056 kernel: acpiphp: Slot [30] registered Dec 13 01:32:17.013068 kernel: acpiphp: Slot [31] registered Dec 13 01:32:17.013088 kernel: PCI host bridge to bus 0000:00 
Dec 13 01:32:17.013247 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:32:17.013377 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:32:17.013502 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:32:17.013624 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 01:32:17.013747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:32:17.013904 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:32:17.014168 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 01:32:17.014332 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 01:32:17.014474 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:32:17.014614 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 01:32:17.014751 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 01:32:17.014906 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 01:32:17.015079 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 01:32:17.015229 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 01:32:17.015379 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 01:32:17.015516 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 01:32:17.015651 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs Dec 13 01:32:17.015795 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 01:32:17.015930 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 01:32:17.020426 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 01:32:17.020626 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:32:17.020779 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:32:17.020918 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 01:32:17.021085 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:32:17.021222 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 01:32:17.021242 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:32:17.021267 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:32:17.021284 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:32:17.021299 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:32:17.021315 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:32:17.021331 kernel: iommu: Default domain type: Translated Dec 13 01:32:17.021347 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:32:17.021363 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:32:17.021379 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:32:17.021395 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:32:17.021414 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 01:32:17.021548 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 01:32:17.023960 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 01:32:17.024305 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:32:17.024334 kernel: vgaarb: loaded Dec 13 01:32:17.024351 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 01:32:17.024368 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Dec 13 01:32:17.024384 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:32:17.024400 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:32:17.024427 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:32:17.024443 kernel: pnp: PnP ACPI init Dec 13 01:32:17.024459 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:32:17.024476 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:32:17.024492 kernel: NET: Registered PF_INET protocol family Dec 13 01:32:17.024508 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:32:17.024524 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:32:17.024539 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:32:17.024559 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:32:17.024575 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:32:17.024591 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:32:17.024606 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:32:17.024623 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:32:17.024639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:32:17.024654 kernel: NET: Registered PF_XDP protocol family Dec 13 01:32:17.024792 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:32:17.024917 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:32:17.025056 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:32:17.025179 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 01:32:17.025362 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:32:17.025384 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:32:17.025401 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:32:17.025417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 01:32:17.025433 kernel: clocksource: Switched to clocksource tsc Dec 13 01:32:17.025449 kernel: Initialise system trusted keyrings Dec 13 01:32:17.025466 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:32:17.025479 kernel: Key type asymmetric registered Dec 13 01:32:17.025491 kernel: Asymmetric key parser 'x509' registered Dec 13 01:32:17.025503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:32:17.025517 kernel: io scheduler mq-deadline registered Dec 13 01:32:17.025530 kernel: io scheduler kyber registered Dec 13 01:32:17.025543 kernel: io scheduler bfq registered Dec 13 01:32:17.025557 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:32:17.025569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:32:17.025586 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:32:17.025599 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:32:17.025613 kernel: i8042: Warning: Keylock active Dec 13 01:32:17.025628 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:32:17.027261 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:32:17.027519 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Dec 13 01:32:17.027665 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:32:17.027794 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:32:16 UTC (1734053536) Dec 13 01:32:17.027924 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:32:17.027941 kernel: intel_pstate: CPU model not supported Dec 13 01:32:17.027956 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:32:17.027970 kernel: Segment Routing with IPv6 Dec 13 01:32:17.027984 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:32:17.028016 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:32:17.028030 kernel: Key type dns_resolver registered Dec 13 01:32:17.028044 kernel: IPI shorthand broadcast: enabled Dec 13 01:32:17.028059 kernel: sched_clock: Marking stable (609026999, 357123807)->(1087963146, -121812340) Dec 13 01:32:17.028077 kernel: registered taskstats version 1 Dec 13 01:32:17.028091 kernel: Loading compiled-in X.509 certificates Dec 13 01:32:17.028105 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:32:17.028118 kernel: Key type .fscrypt registered Dec 13 01:32:17.028132 kernel: Key type fscrypt-provisioning registered Dec 13 01:32:17.028146 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:32:17.028161 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:32:17.028174 kernel: ima: No architecture policies found Dec 13 01:32:17.028192 kernel: clk: Disabling unused clocks Dec 13 01:32:17.028206 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:32:17.028221 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:32:17.028235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:32:17.028249 kernel: Run /init as init process Dec 13 01:32:17.028262 kernel: with arguments: Dec 13 01:32:17.028275 kernel: /init Dec 13 01:32:17.028289 kernel: with environment: Dec 13 01:32:17.028302 kernel: HOME=/ Dec 13 01:32:17.028316 kernel: TERM=linux Dec 13 01:32:17.029236 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:32:17.029294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:32:17.029316 systemd[1]: Detected virtualization amazon. Dec 13 01:32:17.029334 systemd[1]: Detected architecture x86-64. Dec 13 01:32:17.029351 systemd[1]: Running in initrd. Dec 13 01:32:17.029369 systemd[1]: No hostname configured, using default hostname. Dec 13 01:32:17.029386 systemd[1]: Hostname set to . Dec 13 01:32:17.029405 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:32:17.029419 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:32:17.029434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:32:17.029449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:32:17.029464 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:32:17.029479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 13 01:32:17.029493 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:32:17.029511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:32:17.029528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:32:17.029542 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:32:17.029557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:32:17.029571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:32:17.029588 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:32:17.029700 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:32:17.029726 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:32:17.029742 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:32:17.029755 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:32:17.029770 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:32:17.029785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:32:17.029801 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:32:17.029817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:32:17.029835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:32:17.029853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:32:17.029874 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:32:17.029891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:32:17.029908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:32:17.029926 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:32:17.029944 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:32:17.029968 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:32:17.029985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:32:17.030097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:17.030114 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:32:17.030165 systemd-journald[178]: Collecting audit messages is disabled. Dec 13 01:32:17.030208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:32:17.030285 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:32:17.030309 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:32:17.030330 systemd-journald[178]: Journal started Dec 13 01:32:17.030369 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2d8d9904beefba5c02a5290f3a95f2) is 4.8M, max 38.6M, 33.7M free. Dec 13 01:32:17.062107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:32:17.062339 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:32:17.036254 systemd-modules-load[179]: Inserted module 'overlay' Dec 13 01:32:17.069188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 01:32:17.070428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:32:17.085276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:32:17.085250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:32:17.203575 kernel: Bridge firewalling registered Dec 13 01:32:17.087633 systemd-modules-load[179]: Inserted module 'br_netfilter' Dec 13 01:32:17.212269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:32:17.215570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:17.227247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:32:17.235135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:32:17.235518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:32:17.238034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:32:17.252272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:17.260919 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:32:17.266923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:32:17.279161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:32:17.292097 dracut-cmdline[211]: dracut-dracut-053 Dec 13 01:32:17.298631 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:32:17.355626 systemd-resolved[214]: Positive Trust Anchors: Dec 13 01:32:17.355643 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:32:17.355705 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:32:17.359961 systemd-resolved[214]: Defaulting to hostname 'linux'. Dec 13 01:32:17.365258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:32:17.376958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:32:17.402017 kernel: SCSI subsystem initialized Dec 13 01:32:17.411037 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:32:17.423020 kernel: iscsi: registered transport (tcp) Dec 13 01:32:17.444023 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:32:17.444095 kernel: QLogic iSCSI HBA Driver Dec 13 01:32:17.484406 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:32:17.491272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:32:17.560018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:32:17.560215 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:32:17.560237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:32:17.626365 kernel: raid6: avx512x4 gen() 6568 MB/s Dec 13 01:32:17.649136 kernel: raid6: avx512x2 gen() 6864 MB/s Dec 13 01:32:17.667072 kernel: raid6: avx512x1 gen() 1959 MB/s Dec 13 01:32:17.684302 kernel: raid6: avx2x4 gen() 5041 MB/s Dec 13 01:32:17.701034 kernel: raid6: avx2x2 gen() 10388 MB/s Dec 13 01:32:17.721509 kernel: raid6: avx2x1 gen() 7164 MB/s Dec 13 01:32:17.721590 kernel: raid6: using algorithm avx2x2 gen() 10388 MB/s Dec 13 01:32:17.740584 kernel: raid6: .... xor() 6528 MB/s, rmw enabled Dec 13 01:32:17.740683 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:32:17.796117 kernel: xor: automatically using best checksumming function avx Dec 13 01:32:17.965023 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:32:17.976503 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:32:17.982305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:32:18.013490 systemd-udevd[397]: Using default interface naming scheme 'v255'. Dec 13 01:32:18.019023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:32:18.028192 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:32:18.048404 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Dec 13 01:32:18.080509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:32:18.088187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:32:18.153325 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:32:18.165284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:32:18.196354 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:32:18.201594 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:32:18.203436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:32:18.207757 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:32:18.216253 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:32:18.241953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:32:18.254095 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:32:18.266362 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:32:18.266563 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Dec 13 01:32:18.266738 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a6:24:e5:b7:71 Dec 13 01:32:18.273021 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:32:18.282418 (udev-worker)[442]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:32:18.294620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:32:18.296025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:18.299138 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:32:18.300378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:32:18.300566 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:18.301827 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:18.312009 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:32:18.312064 kernel: AES CTR mode by8 optimization enabled Dec 13 01:32:18.313329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:32:18.339369 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:32:18.339736 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 01:32:18.352026 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:32:18.354094 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:32:18.354151 kernel: GPT:9289727 != 16777215 Dec 13 01:32:18.354234 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:32:18.354263 kernel: GPT:9289727 != 16777215 Dec 13 01:32:18.354281 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:32:18.354298 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:32:18.468017 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Dec 13 01:32:18.476635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:32:18.494902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:32:18.504185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:32:18.541530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:32:18.547104 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (443) Dec 13 01:32:18.564284 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:32:18.588112 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:32:18.597429 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:32:18.597649 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:32:18.608221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:32:18.627671 disk-uuid[626]: Primary Header is updated. Dec 13 01:32:18.627671 disk-uuid[626]: Secondary Entries is updated. Dec 13 01:32:18.627671 disk-uuid[626]: Secondary Header is updated. 
Dec 13 01:32:18.633043 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:32:18.638719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:32:18.643026 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:32:19.650023 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:32:19.651859 disk-uuid[627]: The operation has completed successfully. Dec 13 01:32:19.804135 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:32:19.804257 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:32:19.834395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:32:19.838120 sh[970]: Success Dec 13 01:32:19.852023 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:32:19.984805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:32:19.994161 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:32:19.999491 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:32:20.026653 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:32:20.026725 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:32:20.026749 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:32:20.028263 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:32:20.028301 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:32:20.238015 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:32:20.239759 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:32:20.241964 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:32:20.249332 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:32:20.254711 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:32:20.271770 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:32:20.271839 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:32:20.271865 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:32:20.276013 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:32:20.289035 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:32:20.288836 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:32:20.328896 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:32:20.338222 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:32:20.374100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:32:20.381188 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:32:20.409105 systemd-networkd[1162]: lo: Link UP Dec 13 01:32:20.409117 systemd-networkd[1162]: lo: Gained carrier Dec 13 01:32:20.410986 systemd-networkd[1162]: Enumeration completed Dec 13 01:32:20.411416 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:32:20.411421 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:32:20.412362 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:32:20.416158 systemd[1]: Reached target network.target - Network. Dec 13 01:32:20.421775 systemd-networkd[1162]: eth0: Link UP Dec 13 01:32:20.421783 systemd-networkd[1162]: eth0: Gained carrier Dec 13 01:32:20.421796 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:32:20.438118 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.21.168/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:32:21.143911 ignition[1121]: Ignition 2.19.0 Dec 13 01:32:21.143925 ignition[1121]: Stage: fetch-offline Dec 13 01:32:21.144210 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:21.144224 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:32:21.147020 ignition[1121]: Ignition finished successfully Dec 13 01:32:21.150713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:32:21.157250 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:32:21.207857 ignition[1172]: Ignition 2.19.0 Dec 13 01:32:21.208085 ignition[1172]: Stage: fetch Dec 13 01:32:21.211366 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:21.213395 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:32:21.213549 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:32:21.234931 ignition[1172]: PUT result: OK Dec 13 01:32:21.240066 ignition[1172]: parsed url from cmdline: "" Dec 13 01:32:21.240078 ignition[1172]: no config URL provided Dec 13 01:32:21.240089 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:32:21.240106 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:32:21.240140 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:32:21.244217 ignition[1172]: PUT result: OK Dec 13 01:32:21.247330 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:32:21.257349 ignition[1172]: GET result: OK Dec 13 01:32:21.258981 ignition[1172]: parsing config with SHA512: b09f2d0e33b06cd7f93b37482b66af4414c9e7e3d77a0637190aa4e5abdd774401e581cfe022ca7ff3e0aadc5519cd4c76c8b38188828a271aa34f01a141a91a Dec 13 01:32:21.270409 unknown[1172]: fetched base config from "system" Dec 13 01:32:21.270425 unknown[1172]: fetched base config from "system" Dec 13 01:32:21.270436 unknown[1172]: fetched user config from "aws" Dec 13 01:32:21.273043 ignition[1172]: fetch: fetch complete Dec 13 01:32:21.273053 ignition[1172]: fetch: fetch passed Dec 13 01:32:21.273114 ignition[1172]: Ignition finished successfully Dec 13 01:32:21.279032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:32:21.285149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:32:21.308815 ignition[1179]: Ignition 2.19.0 Dec 13 01:32:21.308825 ignition[1179]: Stage: kargs Dec 13 01:32:21.309161 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:21.309170 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:32:21.309244 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:32:21.310638 ignition[1179]: PUT result: OK Dec 13 01:32:21.323323 ignition[1179]: kargs: kargs passed Dec 13 01:32:21.323399 ignition[1179]: Ignition finished successfully Dec 13 01:32:21.326539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:32:21.334304 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:32:21.354349 ignition[1185]: Ignition 2.19.0 Dec 13 01:32:21.354362 ignition[1185]: Stage: disks Dec 13 01:32:21.354797 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:21.354808 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:32:21.354913 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:32:21.356465 ignition[1185]: PUT result: OK Dec 13 01:32:21.362065 ignition[1185]: disks: disks passed Dec 13 01:32:21.362159 ignition[1185]: Ignition finished successfully Dec 13 01:32:21.364673 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:32:21.367563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:32:21.370652 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:32:21.372969 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:32:21.379043 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:32:21.382398 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:32:21.394357 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:32:21.489905 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:32:21.495381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:32:21.504136 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:32:21.661035 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:32:21.661878 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:32:21.663852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:32:21.674223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:32:21.679801 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:32:21.682848 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:32:21.683558 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:32:21.683596 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Dec 13 01:32:21.726375 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1212) Dec 13 01:32:21.729930 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:32:21.730001 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:32:21.730022 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:32:21.730471 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:32:21.744116 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:32:21.748029 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:32:21.757237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:32:22.115194 systemd-networkd[1162]: eth0: Gained IPv6LL Dec 13 01:32:22.456045 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:32:22.462866 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:32:22.468799 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:32:22.475323 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:32:22.882520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:32:22.889148 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:32:22.892685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:32:22.942436 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:32:22.944409 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:32:22.971762 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:32:22.990844 ignition[1328]: INFO : Ignition 2.19.0 Dec 13 01:32:22.990844 ignition[1328]: INFO : Stage: mount Dec 13 01:32:22.993627 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:32:22.993627 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:32:22.993627 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:32:22.998214 ignition[1328]: INFO : PUT result: OK Dec 13 01:32:23.001082 ignition[1328]: INFO : mount: mount passed Dec 13 01:32:23.002409 ignition[1328]: INFO : Ignition finished successfully Dec 13 01:32:23.004596 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:32:23.012519 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:32:23.044553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:32:23.060024 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339) Dec 13 01:32:23.062172 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:32:23.062227 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:32:23.062309 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:32:23.067014 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:32:23.070655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:32:23.102799 ignition[1356]: INFO : Ignition 2.19.0
Dec 13 01:32:23.102799 ignition[1356]: INFO : Stage: files
Dec 13 01:32:23.105589 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:23.105589 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:32:23.109480 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:32:23.111370 ignition[1356]: INFO : PUT result: OK
Dec 13 01:32:23.115029 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:32:23.160109 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:32:23.160109 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:32:23.164724 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:32:23.166452 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:32:23.168149 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:32:23.166801 unknown[1356]: wrote ssh authorized keys file for user: core
Dec 13 01:32:23.171949 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:32:23.171949 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:32:23.291367 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:32:23.442682 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:32:23.442682 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:32:23.447165 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:32:23.447165 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:32:23.453467 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 01:32:23.953840 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:32:24.616568 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 01:32:24.616568 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:32:24.624962 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:32:24.627720 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:32:24.627720 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:32:24.632059 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:32:24.632059 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:32:24.632059 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:32:24.637039 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:32:24.637039 ignition[1356]: INFO : files: files passed
Dec 13 01:32:24.637039 ignition[1356]: INFO : Ignition finished successfully
Dec 13 01:32:24.642419 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:32:24.650351 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:32:24.653380 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:32:24.667411 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:32:24.667616 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:32:24.683865 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:24.683865 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:24.695924 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:32:24.700270 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:32:24.704487 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:32:24.715173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:32:24.753589 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:32:24.753685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:32:24.756869 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:32:24.761710 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:32:24.764597 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:32:24.779179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:32:24.804830 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:32:24.812256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:32:24.833776 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:32:24.834052 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:32:24.840559 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:32:24.842625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:32:24.844153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:32:24.852030 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:32:24.855569 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:32:24.860180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:32:24.862835 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:32:24.866099 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:32:24.867710 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:32:24.870490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:32:24.874554 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:32:24.876803 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:32:24.879363 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:32:24.881103 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:32:24.882341 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:32:24.884920 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:32:24.886398 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:32:24.890378 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:32:24.890499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:32:24.894537 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:32:24.896170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:32:24.899114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:32:24.900817 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:32:24.904518 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:32:24.904639 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:32:24.917228 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:32:24.921260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:32:24.928421 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:32:24.928752 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:32:24.938117 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:32:24.938371 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:32:24.947878 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:32:24.948110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:32:24.961431 ignition[1409]: INFO : Ignition 2.19.0
Dec 13 01:32:24.963068 ignition[1409]: INFO : Stage: umount
Dec 13 01:32:24.963068 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:32:24.963068 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:32:24.967571 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:32:24.967571 ignition[1409]: INFO : PUT result: OK
Dec 13 01:32:24.972838 ignition[1409]: INFO : umount: umount passed
Dec 13 01:32:24.973917 ignition[1409]: INFO : Ignition finished successfully
Dec 13 01:32:24.978435 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:32:24.979331 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:32:24.979540 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:32:24.981694 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:32:24.981800 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:32:24.982977 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:32:24.983053 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:32:24.985598 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:32:24.985663 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:32:24.988704 systemd[1]: Stopped target network.target - Network.
Dec 13 01:32:24.990206 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:32:24.990418 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:32:24.993441 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:32:24.995432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:32:24.995496 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:32:24.997758 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:32:24.998887 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:32:25.001881 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:32:25.001948 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:32:25.004333 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:32:25.004390 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:32:25.007812 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:32:25.007889 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:32:25.011082 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:32:25.011157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:32:25.015831 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:32:25.022304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:32:25.031401 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:32:25.031684 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:32:25.032968 systemd-networkd[1162]: eth0: DHCPv6 lease lost
Dec 13 01:32:25.033940 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:32:25.034079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:32:25.035685 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:32:25.035940 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:32:25.041526 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:32:25.041685 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:32:25.044726 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:32:25.044869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:32:25.053187 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:32:25.055314 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:32:25.055378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:32:25.057353 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:32:25.057403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:25.059645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:32:25.059703 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:32:25.061782 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:32:25.063063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:32:25.066203 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:32:25.089546 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:32:25.094264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:32:25.097769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:32:25.097821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:32:25.099219 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:32:25.099251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:32:25.109422 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:32:25.109538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:32:25.111889 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:32:25.111967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:32:25.114432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:32:25.114499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:32:25.123732 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:32:25.126883 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:32:25.128730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:32:25.132085 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:32:25.132164 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:32:25.141639 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:32:25.143117 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:32:25.147640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:32:25.147725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:25.152907 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:32:25.153113 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:32:25.155426 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:32:25.155937 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:32:25.160921 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:32:25.173259 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:32:25.190863 systemd[1]: Switching root.
Dec 13 01:32:25.251904 systemd-journald[178]: Journal stopped
Dec 13 01:32:28.827152 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:32:28.827244 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:32:28.827265 kernel: SELinux: policy capability open_perms=1
Dec 13 01:32:28.832016 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:32:28.832073 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:32:28.832101 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:32:28.832120 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:32:28.832138 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:32:28.832155 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:32:28.832172 kernel: audit: type=1403 audit(1734053546.928:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:32:28.832192 systemd[1]: Successfully loaded SELinux policy in 77.927ms.
Dec 13 01:32:28.832223 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.576ms.
Dec 13 01:32:28.832242 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:32:28.832263 systemd[1]: Detected virtualization amazon.
Dec 13 01:32:28.832285 systemd[1]: Detected architecture x86-64.
Dec 13 01:32:28.832303 systemd[1]: Detected first boot.
Dec 13 01:32:28.832322 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:32:28.832341 zram_generator::config[1452]: No configuration found.
Dec 13 01:32:28.832359 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:32:28.832377 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:32:28.832395 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:32:28.832413 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:32:28.832436 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:32:28.832455 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:32:28.832474 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:32:28.832492 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:32:28.832511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:32:28.832534 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:32:28.832554 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:32:28.832572 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:32:28.832593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:32:28.832612 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:32:28.832630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:32:28.832648 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:32:28.832670 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:32:28.832689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:32:28.832707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:32:28.832731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:32:28.832750 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:32:28.832771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:32:28.832790 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:32:28.832808 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:32:28.832827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:32:28.832847 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:32:28.848461 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:32:28.848535 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:32:28.848557 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:32:28.848585 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:32:28.848604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:32:28.848622 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:32:28.848655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:32:28.848673 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:32:28.848691 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:32:28.848711 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:32:28.848730 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:32:28.848750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:28.848772 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:32:28.848791 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:32:28.848810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:32:28.848829 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:32:28.848848 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:32:28.848867 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:32:28.848886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:28.848905 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:32:28.848923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:32:28.848946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:32:28.848965 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:32:28.848984 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:32:28.849015 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:32:28.849035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:32:28.849054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:32:28.853745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:32:28.856422 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:32:28.856481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:32:28.856502 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:32:28.856520 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:32:28.856539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:32:28.856558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:32:28.856576 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:32:28.856594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:32:28.856613 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:32:28.856633 systemd[1]: Stopped verity-setup.service.
Dec 13 01:32:28.856656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:28.856673 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:32:28.856691 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:32:28.856710 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:32:28.856728 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:32:28.856747 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:32:28.856768 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:32:28.856786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:32:28.856804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:32:28.856822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:32:28.856841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:32:28.856860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:32:28.856878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:32:28.856900 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:32:28.856918 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:32:28.856937 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:32:28.856958 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:32:28.856976 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:32:28.857007 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:32:28.857031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:32:28.857049 kernel: fuse: init (API version 7.39)
Dec 13 01:32:28.857067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:32:28.857086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:28.861096 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:32:28.861186 systemd-journald[1527]: Collecting audit messages is disabled.
Dec 13 01:32:28.861226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:32:28.861255 kernel: loop: module loaded
Dec 13 01:32:28.861278 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:32:28.861351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:32:28.861372 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:32:28.861391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:32:28.861414 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:32:28.861436 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:32:28.861458 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:32:28.861480 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:32:28.861507 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:32:28.861531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:32:28.861554 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:32:28.861576 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:32:28.861603 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:32:28.861746 systemd-journald[1527]: Journal started
Dec 13 01:32:28.861793 systemd-journald[1527]: Runtime Journal (/run/log/journal/ec2d8d9904beefba5c02a5290f3a95f2) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:32:28.202802 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:32:28.259593 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:32:28.260083 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:32:28.882080 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:32:28.913650 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:32:28.913718 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:32:28.908389 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Dec 13 01:32:28.916189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:32:28.908404 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Dec 13 01:32:28.918035 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:32:28.918076 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:32:28.922794 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:32:28.925214 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:32:28.935076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:32:29.028108 kernel: ACPI: bus type drm_connector registered
Dec 13 01:32:29.028002 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:32:29.029980 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:32:29.038282 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:32:29.040129 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:32:29.040334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:32:29.042856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:32:29.055816 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:32:29.090048 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:32:29.102429 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:32:29.106575 systemd-journald[1527]: Time spent on flushing to /var/log/journal/ec2d8d9904beefba5c02a5290f3a95f2 is 49.639ms for 974 entries.
Dec 13 01:32:29.106575 systemd-journald[1527]: System Journal (/var/log/journal/ec2d8d9904beefba5c02a5290f3a95f2) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:32:29.217280 systemd-journald[1527]: Received client request to flush runtime journal.
Dec 13 01:32:29.217445 kernel: loop1: detected capacity change from 0 to 61336
Dec 13 01:32:29.122343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:32:29.134360 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:32:29.168330 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:32:29.203074 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:32:29.213293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:32:29.222220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:32:29.244819 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Dec 13 01:32:29.244852 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Dec 13 01:32:29.252832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:32:29.346077 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 01:32:29.475797 kernel: loop3: detected capacity change from 0 to 210664
Dec 13 01:32:29.534374 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:32:29.562033 kernel: loop5: detected capacity change from 0 to 61336
Dec 13 01:32:29.583016 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 01:32:29.610018 kernel: loop7: detected capacity change from 0 to 210664
Dec 13 01:32:29.622531 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:32:29.624012 (sd-merge)[1607]: Merged extensions into '/usr'.
Dec 13 01:32:29.630853 systemd[1]: Reloading requested from client PID 1548 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:32:29.631054 systemd[1]: Reloading...
Dec 13 01:32:29.718044 zram_generator::config[1629]: No configuration found.
Dec 13 01:32:30.006261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:30.138474 systemd[1]: Reloading finished in 506 ms.
Dec 13 01:32:30.168921 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:32:30.183406 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:32:30.227194 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:32:30.263191 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:32:30.263213 systemd[1]: Reloading...
Dec 13 01:32:30.279974 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:32:30.280653 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:32:30.282297 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:32:30.282717 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
Dec 13 01:32:30.282811 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
Dec 13 01:32:30.289480 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:32:30.289495 systemd-tmpfiles[1682]: Skipping /boot
Dec 13 01:32:30.323210 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:32:30.324689 systemd-tmpfiles[1682]: Skipping /boot
Dec 13 01:32:30.429779 zram_generator::config[1713]: No configuration found.
Dec 13 01:32:30.592705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:30.696316 systemd[1]: Reloading finished in 432 ms.
Dec 13 01:32:30.714389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:32:30.723754 ldconfig[1540]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:32:30.723637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:32:30.739287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:32:30.745298 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:32:30.754540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:32:30.774384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:32:30.785226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:32:30.788683 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:32:30.791950 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:32:30.806888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.807197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:30.816425 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:32:30.819544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:32:30.825167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:32:30.826812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:30.838870 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:32:30.842086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.847285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.847753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:30.847980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:30.849796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.855407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:32:30.871760 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:32:30.887399 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:32:30.889903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:32:30.890138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:32:30.892362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:32:30.892545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:32:30.894754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:32:30.894932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:32:30.908620 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.910085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:32:30.917413 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:32:30.918810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:32:30.920112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:32:30.920267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:32:30.920328 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:32:30.921599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:32:30.923030 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:32:30.940436 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:32:30.945162 augenrules[1797]: No rules
Dec 13 01:32:30.947280 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:32:30.952720 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:32:30.954076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:32:30.960789 systemd-udevd[1773]: Using default interface naming scheme 'v255'.
Dec 13 01:32:30.981651 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:32:30.986769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:32:30.989166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:32:31.010238 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:32:31.025274 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:32:31.105491 systemd-resolved[1772]: Positive Trust Anchors:
Dec 13 01:32:31.105510 systemd-resolved[1772]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:32:31.105653 systemd-resolved[1772]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:32:31.113080 systemd-resolved[1772]: Defaulting to hostname 'linux'.
Dec 13 01:32:31.116310 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:32:31.117648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:32:31.177676 systemd-networkd[1814]: lo: Link UP
Dec 13 01:32:31.177687 systemd-networkd[1814]: lo: Gained carrier
Dec 13 01:32:31.180600 systemd-networkd[1814]: Enumeration completed
Dec 13 01:32:31.180744 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:32:31.182189 systemd[1]: Reached target network.target - Network.
Dec 13 01:32:31.188274 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:32:31.189837 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:32:31.191088 (udev-worker)[1811]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:32:31.225020 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1824)
Dec 13 01:32:31.235435 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:31.235446 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:32:31.240503 systemd-networkd[1814]: eth0: Link UP
Dec 13 01:32:31.241939 systemd-networkd[1814]: eth0: Gained carrier
Dec 13 01:32:31.241971 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:32:31.245035 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1824)
Dec 13 01:32:31.251279 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.21.168/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:32:31.283010 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 01:32:31.288644 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:32:31.293169 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:32:31.296215 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 01:32:31.305016 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 01:32:31.329241 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:32:31.375052 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1823)
Dec 13 01:32:31.376011 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:32:31.376439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:32:31.554425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:32:31.624067 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:32:31.627200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:32:31.640812 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:32:31.653530 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:32:31.683059 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:32:31.692618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:32:31.722531 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:32:31.724887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:32:31.728270 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:32:31.734757 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:32:31.740323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:32:31.742056 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:32:31.743507 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:32:31.745070 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:32:31.747256 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:32:31.747298 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:32:31.748688 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:32:31.752675 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:32:31.760864 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:32:31.769106 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:32:31.771693 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:32:31.774196 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:32:31.775507 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:32:31.776654 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:32:31.777704 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:32:31.777743 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:32:31.786831 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:32:31.793301 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:32:31.797175 lvm[1935]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:32:31.803297 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:32:31.816152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:32:31.836812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:32:31.838385 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:32:31.841444 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:32:31.850264 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:32:31.879416 jq[1939]: false
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found loop4
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found loop5
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found loop6
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found loop7
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found nvme0n1
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found nvme0n1p1
Dec 13 01:32:31.896759 extend-filesystems[1940]: Found nvme0n1p2
Dec 13 01:32:31.896112 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found nvme0n1p3
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found usr
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found nvme0n1p4
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found nvme0n1p6
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found nvme0n1p7
Dec 13 01:32:31.913645 extend-filesystems[1940]: Found nvme0n1p9
Dec 13 01:32:31.913645 extend-filesystems[1940]: Checking size of /dev/nvme0n1p9
Dec 13 01:32:31.921239 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:32:31.932785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:32:31.946204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:32:31.963263 dbus-daemon[1938]: [system] SELinux support is enabled
Dec 13 01:32:31.971174 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:32:31.978166 dbus-daemon[1938]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:32:31.972791 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:32:31.973440 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:32:31.980219 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:32:31.984895 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:32:31.985861 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:32:31.992065 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:32:32.001568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:32:32.020601 extend-filesystems[1940]: Resized partition /dev/nvme0n1p9
Dec 13 01:32:32.001814 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:32:32.028226 update_engine[1955]: I20241213 01:32:32.014863 1955 main.cc:92] Flatcar Update Engine starting
Dec 13 01:32:32.028460 jq[1958]: true
Dec 13 01:32:32.026713 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:32:32.025444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:32:32.025498 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:32:32.029389 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:32:32.029419 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:32:32.057441 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:32:32.046412 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:32:32.065137 update_engine[1955]: I20241213 01:32:32.063133 1955 update_check_scheduler.cc:74] Next update check in 11m50s
Dec 13 01:32:32.046665 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:32:32.065290 jq[1970]: true
Dec 13 01:32:32.078471 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:32:32.066840 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:32:32.077477 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:32:32.089267 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:32:32.131170 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:32:32.131469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:32:32.139979 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:32:32.141693 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:32:32.141726 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.181 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.189 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.191 INFO Fetch successful
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.192 INFO Fetch successful
Dec 13 01:32:32.197031 coreos-metadata[1937]: Dec 13 01:32:32.192 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: ----------------------------------------------------
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: corporation.  Support and training for ntp-4 are
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: available at https://www.nwtime.org/support
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: ----------------------------------------------------
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: proto: precision = 0.097 usec (-23)
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: basedate set to 2024-11-30
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listen normally on 3 eth0 172.31.21.168:123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listen normally on 4 lo [::1]:123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: bind(21) AF_INET6 fe80::4a6:24ff:fee5:b771%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: unable to create socket on eth0 (5) for fe80::4a6:24ff:fee5:b771%2#123
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: failed to init interface for address fe80::4a6:24ff:fee5:b771%2
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:32:32.197525 ntpd[1943]: 13 Dec 01:32:32 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:32:32.206498 tar[1961]: linux-amd64/helm
Dec 13 01:32:32.151589 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:32:32.141736 ntpd[1943]: ----------------------------------------------------
Dec 13 01:32:32.213649 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:32:32.213649 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:32:32.213649 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.200 INFO Fetch successful Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.206 INFO Fetch successful Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.207 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.225 INFO Fetch failed with 404: resource not found Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.225 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.229 INFO Fetch successful Dec 13 01:32:32.230200 coreos-metadata[1937]: Dec 13 01:32:32.229 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:32:32.209854 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:32:32.141745 ntpd[1943]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:32:32.235185 extend-filesystems[1940]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:32:32.210095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:32:32.240856 coreos-metadata[1937]: Dec 13 01:32:32.234 INFO Fetch successful Dec 13 01:32:32.240856 coreos-metadata[1937]: Dec 13 01:32:32.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:32:32.240856 coreos-metadata[1937]: Dec 13 01:32:32.240 INFO Fetch successful Dec 13 01:32:32.240856 coreos-metadata[1937]: Dec 13 01:32:32.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:32:32.141756 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:32:32.141766 ntpd[1943]: corporation. 
Support and training for ntp-4 are Dec 13 01:32:32.141776 ntpd[1943]: available at https://www.nwtime.org/support Dec 13 01:32:32.141785 ntpd[1943]: ---------------------------------------------------- Dec 13 01:32:32.149695 ntpd[1943]: proto: precision = 0.097 usec (-23) Dec 13 01:32:32.151826 ntpd[1943]: basedate set to 2024-11-30 Dec 13 01:32:32.151845 ntpd[1943]: gps base set to 2024-12-01 (week 2343) Dec 13 01:32:32.163125 ntpd[1943]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:32:32.163176 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:32:32.163357 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:32:32.163393 ntpd[1943]: Listen normally on 3 eth0 172.31.21.168:123 Dec 13 01:32:32.163433 ntpd[1943]: Listen normally on 4 lo [::1]:123 Dec 13 01:32:32.163475 ntpd[1943]: bind(21) AF_INET6 fe80::4a6:24ff:fee5:b771%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:32:32.163496 ntpd[1943]: unable to create socket on eth0 (5) for fe80::4a6:24ff:fee5:b771%2#123 Dec 13 01:32:32.163513 ntpd[1943]: failed to init interface for address fe80::4a6:24ff:fee5:b771%2 Dec 13 01:32:32.163544 ntpd[1943]: Listening on routing socket on fd #21 for interface updates Dec 13 01:32:32.177344 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:32:32.177376 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:32:32.253015 coreos-metadata[1937]: Dec 13 01:32:32.248 INFO Fetch successful Dec 13 01:32:32.253015 coreos-metadata[1937]: Dec 13 01:32:32.248 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:32:32.253015 coreos-metadata[1937]: Dec 13 01:32:32.251 INFO Fetch successful Dec 13 01:32:32.283015 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1815) Dec 13 01:32:32.297834 systemd-logind[1953]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:32:32.297866 systemd-logind[1953]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:32:32.297888 systemd-logind[1953]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:32:32.300912 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:32:32.307444 bash[2015]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:32:32.309895 systemd-logind[1953]: New seat seat0. Dec 13 01:32:32.311442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:32:32.381208 systemd[1]: Starting sshkeys.service... Dec 13 01:32:32.438943 locksmithd[1985]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:32:32.502499 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:32:32.554968 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:32:32.567320 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:32:32.581174 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:32:32.602366 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:32:32.647403 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:32:32.647620 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 13 01:32:32.652494 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1978 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:32:32.669597 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:32:32.675151 systemd-networkd[1814]: eth0: Gained IPv6LL Dec 13 01:32:32.679486 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:32:32.685729 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:32:32.692641 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:32:32.703676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:32.718093 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:32:32.756281 polkitd[2094]: Started polkitd version 121 Dec 13 01:32:32.783826 unknown[2076]: wrote ssh authorized keys file for user: core Dec 13 01:32:32.799078 coreos-metadata[2076]: Dec 13 01:32:32.777 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:32:32.799078 coreos-metadata[2076]: Dec 13 01:32:32.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:32:32.799078 coreos-metadata[2076]: Dec 13 01:32:32.778 INFO Fetch successful Dec 13 01:32:32.799078 coreos-metadata[2076]: Dec 13 01:32:32.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:32:32.799078 coreos-metadata[2076]: Dec 13 01:32:32.779 INFO Fetch successful Dec 13 01:32:32.802793 polkitd[2094]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:32:32.817577 polkitd[2094]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:32:32.818296 polkitd[2094]: Finished loading, compiling and executing 2 rules Dec 13 01:32:32.823394 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:32:32.823571 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:32:32.828250 polkitd[2094]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:32:32.856712 update-ssh-keys[2129]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:32:32.858823 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:32:32.865048 systemd[1]: Finished sshkeys.service. Dec 13 01:32:32.884213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:32:32.945315 systemd-hostnamed[1978]: Hostname set to (transient) Dec 13 01:32:32.945823 systemd-resolved[1772]: System hostname changed to 'ip-172-31-21-168'. Dec 13 01:32:32.975756 amazon-ssm-agent[2107]: Initializing new seelog logger Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: New Seelog Logger Creation Complete Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 processing appconfig overrides Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 processing appconfig overrides Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:32.982077 amazon-ssm-agent[2107]: 2024/12/13 01:32:32 processing appconfig overrides Dec 13 01:32:32.988202 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO Proxy environment variables: Dec 13 01:32:33.003014 amazon-ssm-agent[2107]: 2024/12/13 01:32:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:33.003014 amazon-ssm-agent[2107]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:32:33.003014 amazon-ssm-agent[2107]: 2024/12/13 01:32:33 processing appconfig overrides Dec 13 01:32:33.052218 sshd_keygen[1986]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:32:33.089736 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO no_proxy: Dec 13 01:32:33.121461 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:32:33.132398 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:32:33.173013 containerd[1979]: time="2024-12-13T01:32:33.172273820Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:32:33.184476 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:32:33.184714 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:32:33.189713 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO https_proxy: Dec 13 01:32:33.195320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:32:33.234573 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:32:33.244744 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:32:33.252454 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:32:33.255453 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:32:33.287766 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO http_proxy: Dec 13 01:32:33.296379 containerd[1979]: time="2024-12-13T01:32:33.296099120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302164724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302218110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302244177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302424329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302445744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302516915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302534354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302756203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302778733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302798117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303231 containerd[1979]: time="2024-12-13T01:32:33.302814793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303712 containerd[1979]: time="2024-12-13T01:32:33.302912877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303712 containerd[1979]: time="2024-12-13T01:32:33.303185304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:32:33.303969 containerd[1979]: time="2024-12-13T01:32:33.303942526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:32:33.304063 containerd[1979]: time="2024-12-13T01:32:33.304047536Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:32:33.304231 containerd[1979]: time="2024-12-13T01:32:33.304214926Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:32:33.304344 containerd[1979]: time="2024-12-13T01:32:33.304330674Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:32:33.312885 containerd[1979]: time="2024-12-13T01:32:33.312814592Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313059126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313089695Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313159335Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313186335Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313361352Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313731309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313838069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313857704Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313878433Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313898626Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313917853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313938263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313959209Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.315574 containerd[1979]: time="2024-12-13T01:32:33.313980554Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314029837Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314049053Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314068248Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314096761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314116377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314134682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314154810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314172962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314192129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314212293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314238183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314257327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314279005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316170 containerd[1979]: time="2024-12-13T01:32:33.314296440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314313843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314333137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314357020Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314387323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314405161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314422620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314479958Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314506049Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314522568Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314542720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314557603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314580144Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314594505Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:32:33.316681 containerd[1979]: time="2024-12-13T01:32:33.314610106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:32:33.320484 containerd[1979]: time="2024-12-13T01:32:33.318236908Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:32:33.320484 containerd[1979]: time="2024-12-13T01:32:33.318349299Z" level=info msg="Connect containerd service" Dec 13 01:32:33.320484 containerd[1979]: time="2024-12-13T01:32:33.318414270Z" level=info msg="using legacy CRI server" Dec 13 01:32:33.320484 containerd[1979]: time="2024-12-13T01:32:33.318425536Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:32:33.320484 containerd[1979]: time="2024-12-13T01:32:33.318571251Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:32:33.322177 containerd[1979]: time="2024-12-13T01:32:33.321291688Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:32:33.327027 
containerd[1979]: time="2024-12-13T01:32:33.322705937Z" level=info msg="Start subscribing containerd event" Dec 13 01:32:33.328193 containerd[1979]: time="2024-12-13T01:32:33.323359060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:32:33.328193 containerd[1979]: time="2024-12-13T01:32:33.327282359Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:32:33.329334 containerd[1979]: time="2024-12-13T01:32:33.328830325Z" level=info msg="Start recovering state" Dec 13 01:32:33.329334 containerd[1979]: time="2024-12-13T01:32:33.328946450Z" level=info msg="Start event monitor" Dec 13 01:32:33.329334 containerd[1979]: time="2024-12-13T01:32:33.328971006Z" level=info msg="Start snapshots syncer" Dec 13 01:32:33.329334 containerd[1979]: time="2024-12-13T01:32:33.328985596Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:32:33.329334 containerd[1979]: time="2024-12-13T01:32:33.329019178Z" level=info msg="Start streaming server" Dec 13 01:32:33.329225 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:32:33.333628 containerd[1979]: time="2024-12-13T01:32:33.332972453Z" level=info msg="containerd successfully booted in 0.163345s" Dec 13 01:32:33.386769 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:32:33.486891 amazon-ssm-agent[2107]: 2024-12-13 01:32:32 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:32:33.586110 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO Agent will take identity from EC2 Dec 13 01:32:33.684812 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:32:33.756227 tar[1961]: linux-amd64/LICENSE Dec 13 01:32:33.756227 tar[1961]: linux-amd64/README.md Dec 13 01:32:33.772731 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:32:33.783698 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:32:33.883537 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:32:33.983540 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [Registrar] Starting registrar module Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [EC2Identity] EC2 registration was successful. 
Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:32:33.991451 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:32:34.082763 amazon-ssm-agent[2107]: 2024-12-13 01:32:33 INFO [CredentialRefresher] Next credential rotation will be in 31.199994280466665 minutes
Dec 13 01:32:34.182443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:34.184301 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:32:34.197033 systemd[1]: Startup finished in 744ms (kernel) + 10.197s (initrd) + 7.341s (userspace) = 18.284s.
Dec 13 01:32:34.298527 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:32:34.866811 kubelet[2186]: E1213 01:32:34.866748 2186 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:32:34.869310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:32:34.869506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:32:34.869858 systemd[1]: kubelet.service: Consumed 1.005s CPU time.
Dec 13 01:32:35.002925 amazon-ssm-agent[2107]: 2024-12-13 01:32:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:32:35.103374 amazon-ssm-agent[2107]: 2024-12-13 01:32:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2199) started
Dec 13 01:32:35.142186 ntpd[1943]: Listen normally on 6 eth0 [fe80::4a6:24ff:fee5:b771%2]:123
Dec 13 01:32:35.142511 ntpd[1943]: 13 Dec 01:32:35 ntpd[1943]: Listen normally on 6 eth0 [fe80::4a6:24ff:fee5:b771%2]:123
Dec 13 01:32:35.203580 amazon-ssm-agent[2107]: 2024-12-13 01:32:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:32:40.282309 systemd-resolved[1772]: Clock change detected. Flushing caches.
Dec 13 01:32:42.956357 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:32:42.965332 systemd[1]: Started sshd@0-172.31.21.168:22-139.178.68.195:45446.service - OpenSSH per-connection server daemon (139.178.68.195:45446).
Dec 13 01:32:43.122213 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 45446 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:43.124335 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:43.133175 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:32:43.138347 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:32:43.141789 systemd-logind[1953]: New session 1 of user core.
Dec 13 01:32:43.154304 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:32:43.161358 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:32:43.168455 (systemd)[2215]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:32:43.284180 systemd[2215]: Queued start job for default target default.target.
Dec 13 01:32:43.297246 systemd[2215]: Created slice app.slice - User Application Slice.
Dec 13 01:32:43.297291 systemd[2215]: Reached target paths.target - Paths.
Dec 13 01:32:43.297313 systemd[2215]: Reached target timers.target - Timers.
Dec 13 01:32:43.298841 systemd[2215]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:32:43.311695 systemd[2215]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:32:43.311887 systemd[2215]: Reached target sockets.target - Sockets.
Dec 13 01:32:43.311910 systemd[2215]: Reached target basic.target - Basic System.
Dec 13 01:32:43.311961 systemd[2215]: Reached target default.target - Main User Target.
Dec 13 01:32:43.312015 systemd[2215]: Startup finished in 136ms.
Dec 13 01:32:43.312339 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:32:43.323243 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:32:43.474630 systemd[1]: Started sshd@1-172.31.21.168:22-139.178.68.195:45458.service - OpenSSH per-connection server daemon (139.178.68.195:45458).
Dec 13 01:32:43.628345 sshd[2226]: Accepted publickey for core from 139.178.68.195 port 45458 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:43.630045 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:43.635390 systemd-logind[1953]: New session 2 of user core.
Dec 13 01:32:43.643197 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:32:43.762132 sshd[2226]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:43.765342 systemd[1]: sshd@1-172.31.21.168:22-139.178.68.195:45458.service: Deactivated successfully.
Dec 13 01:32:43.767493 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:32:43.769032 systemd-logind[1953]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:32:43.770238 systemd-logind[1953]: Removed session 2.
Dec 13 01:32:43.797149 systemd[1]: Started sshd@2-172.31.21.168:22-139.178.68.195:45466.service - OpenSSH per-connection server daemon (139.178.68.195:45466).
Dec 13 01:32:43.962656 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 45466 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:43.964286 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:43.973873 systemd-logind[1953]: New session 3 of user core.
Dec 13 01:32:43.982232 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:32:44.098568 sshd[2233]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:44.101746 systemd[1]: sshd@2-172.31.21.168:22-139.178.68.195:45466.service: Deactivated successfully.
Dec 13 01:32:44.103632 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:32:44.105024 systemd-logind[1953]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:32:44.106271 systemd-logind[1953]: Removed session 3.
Dec 13 01:32:44.140347 systemd[1]: Started sshd@3-172.31.21.168:22-139.178.68.195:45476.service - OpenSSH per-connection server daemon (139.178.68.195:45476).
Dec 13 01:32:44.297366 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 45476 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:44.298502 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:44.303073 systemd-logind[1953]: New session 4 of user core.
Dec 13 01:32:44.309173 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:32:44.433940 sshd[2240]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:44.436926 systemd[1]: sshd@3-172.31.21.168:22-139.178.68.195:45476.service: Deactivated successfully.
Dec 13 01:32:44.438906 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:32:44.440369 systemd-logind[1953]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:32:44.441702 systemd-logind[1953]: Removed session 4.
Dec 13 01:32:44.472381 systemd[1]: Started sshd@4-172.31.21.168:22-139.178.68.195:45490.service - OpenSSH per-connection server daemon (139.178.68.195:45490).
Dec 13 01:32:44.631005 sshd[2247]: Accepted publickey for core from 139.178.68.195 port 45490 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:44.632691 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:44.638263 systemd-logind[1953]: New session 5 of user core.
Dec 13 01:32:44.649195 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:32:44.780855 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:32:44.784775 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:32:44.806630 sudo[2250]: pam_unix(sudo:session): session closed for user root
Dec 13 01:32:44.830199 sshd[2247]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:44.834315 systemd[1]: sshd@4-172.31.21.168:22-139.178.68.195:45490.service: Deactivated successfully.
Dec 13 01:32:44.836486 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:32:44.838057 systemd-logind[1953]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:32:44.839572 systemd-logind[1953]: Removed session 5.
Dec 13 01:32:44.877417 systemd[1]: Started sshd@5-172.31.21.168:22-139.178.68.195:45500.service - OpenSSH per-connection server daemon (139.178.68.195:45500).
Dec 13 01:32:45.035570 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 45500 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:45.037219 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:45.043065 systemd-logind[1953]: New session 6 of user core.
Dec 13 01:32:45.056212 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:32:45.161667 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:32:45.162092 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:32:45.173713 sudo[2259]: pam_unix(sudo:session): session closed for user root
Dec 13 01:32:45.181233 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:32:45.181623 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:32:45.197064 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:32:45.200376 auditctl[2262]: No rules
Dec 13 01:32:45.200898 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:32:45.201144 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:32:45.208645 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:32:45.237064 augenrules[2280]: No rules
Dec 13 01:32:45.238516 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:32:45.239861 sudo[2258]: pam_unix(sudo:session): session closed for user root
Dec 13 01:32:45.263412 sshd[2255]: pam_unix(sshd:session): session closed for user core
Dec 13 01:32:45.267799 systemd[1]: sshd@5-172.31.21.168:22-139.178.68.195:45500.service: Deactivated successfully.
Dec 13 01:32:45.269787 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:32:45.271267 systemd-logind[1953]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:32:45.272656 systemd-logind[1953]: Removed session 6.
Dec 13 01:32:45.301386 systemd[1]: Started sshd@6-172.31.21.168:22-139.178.68.195:45516.service - OpenSSH per-connection server daemon (139.178.68.195:45516).
Dec 13 01:32:45.472227 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 45516 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:32:45.473840 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:32:45.479150 systemd-logind[1953]: New session 7 of user core.
Dec 13 01:32:45.489289 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:32:45.593184 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:32:45.593588 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:32:46.110624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:32:46.115295 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:32:46.125359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:46.126086 (dockerd)[2306]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:32:46.688213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:46.700646 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:32:46.814998 kubelet[2319]: E1213 01:32:46.813074 2319 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:32:46.822832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:32:46.823117 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:32:46.855456 dockerd[2306]: time="2024-12-13T01:32:46.855396702Z" level=info msg="Starting up"
Dec 13 01:32:46.953540 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2039077188-merged.mount: Deactivated successfully.
Dec 13 01:32:46.978737 systemd[1]: var-lib-docker-metacopy\x2dcheck438589474-merged.mount: Deactivated successfully.
Dec 13 01:32:47.001287 dockerd[2306]: time="2024-12-13T01:32:47.001238782Z" level=info msg="Loading containers: start." Dec 13 01:32:47.153016 kernel: Initializing XFRM netlink socket Dec 13 01:32:47.184384 (udev-worker)[2345]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:32:47.250150 systemd-networkd[1814]: docker0: Link UP Dec 13 01:32:47.275823 dockerd[2306]: time="2024-12-13T01:32:47.275771788Z" level=info msg="Loading containers: done." Dec 13 01:32:47.303172 dockerd[2306]: time="2024-12-13T01:32:47.303110628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:32:47.303373 dockerd[2306]: time="2024-12-13T01:32:47.303278001Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:32:47.303457 dockerd[2306]: time="2024-12-13T01:32:47.303428438Z" level=info msg="Daemon has completed initialization" Dec 13 01:32:47.343461 dockerd[2306]: time="2024-12-13T01:32:47.343394648Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:32:47.343863 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:32:47.949871 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2879359632-merged.mount: Deactivated successfully. Dec 13 01:32:48.558164 containerd[1979]: time="2024-12-13T01:32:48.557716150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:32:49.298791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492005467.mount: Deactivated successfully. Dec 13 01:32:52.180316 containerd[1979]: time="2024-12-13T01:32:52.180259942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:52.184998 containerd[1979]: time="2024-12-13T01:32:52.184922217Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 01:32:52.187999 containerd[1979]: time="2024-12-13T01:32:52.187935316Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:52.192034 containerd[1979]: time="2024-12-13T01:32:52.191651005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:52.193306 containerd[1979]: time="2024-12-13T01:32:52.193263809Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.635498192s" Dec 13 01:32:52.193398 containerd[1979]: time="2024-12-13T01:32:52.193314644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:32:52.221228 containerd[1979]: time="2024-12-13T01:32:52.221190801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:32:55.065343 
containerd[1979]: time="2024-12-13T01:32:55.065283894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.070721 containerd[1979]: time="2024-12-13T01:32:55.070639983Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 01:32:55.072693 containerd[1979]: time="2024-12-13T01:32:55.072237682Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.087932 containerd[1979]: time="2024-12-13T01:32:55.087879156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:55.089301 containerd[1979]: time="2024-12-13T01:32:55.089106034Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.867874437s" Dec 13 01:32:55.089301 containerd[1979]: time="2024-12-13T01:32:55.089159015Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:32:55.116397 containerd[1979]: time="2024-12-13T01:32:55.116358508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:32:57.073724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:32:57.075938 containerd[1979]: time="2024-12-13T01:32:57.074512317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:57.077330 containerd[1979]: time="2024-12-13T01:32:57.077281024Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 01:32:57.079620 containerd[1979]: time="2024-12-13T01:32:57.079583740Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:57.083121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:32:57.091802 containerd[1979]: time="2024-12-13T01:32:57.091742949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:57.095931 containerd[1979]: time="2024-12-13T01:32:57.095712949Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.979309768s" Dec 13 01:32:57.095931 containerd[1979]: time="2024-12-13T01:32:57.095768610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:32:57.131213 containerd[1979]: time="2024-12-13T01:32:57.131145403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:32:57.552290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:57.557725 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:32:57.604562 kubelet[2552]: E1213 01:32:57.604485 2552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:32:57.607901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:32:57.608160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:32:58.530630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814157516.mount: Deactivated successfully. 
Dec 13 01:32:59.128289 containerd[1979]: time="2024-12-13T01:32:59.128223796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:59.141660 containerd[1979]: time="2024-12-13T01:32:59.141297105Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 01:32:59.151582 containerd[1979]: time="2024-12-13T01:32:59.151508602Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:59.163362 containerd[1979]: time="2024-12-13T01:32:59.163282866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:59.164347 containerd[1979]: time="2024-12-13T01:32:59.164182310Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.032991443s" Dec 13 01:32:59.164347 containerd[1979]: time="2024-12-13T01:32:59.164225252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:32:59.190659 containerd[1979]: time="2024-12-13T01:32:59.190616140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:32:59.776300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205127887.mount: Deactivated successfully. 
Dec 13 01:33:01.072858 containerd[1979]: time="2024-12-13T01:33:01.072807090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.074331 containerd[1979]: time="2024-12-13T01:33:01.074267326Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:33:01.076269 containerd[1979]: time="2024-12-13T01:33:01.076122473Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.081170 containerd[1979]: time="2024-12-13T01:33:01.081085641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.082310 containerd[1979]: time="2024-12-13T01:33:01.082024194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.891368129s" Dec 13 01:33:01.082310 containerd[1979]: time="2024-12-13T01:33:01.082067987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:33:01.107558 containerd[1979]: time="2024-12-13T01:33:01.107515334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:33:01.692417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392481556.mount: Deactivated successfully. 
Dec 13 01:33:01.703852 containerd[1979]: time="2024-12-13T01:33:01.703777882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.705665 containerd[1979]: time="2024-12-13T01:33:01.705607674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:33:01.707471 containerd[1979]: time="2024-12-13T01:33:01.707306843Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.710398 containerd[1979]: time="2024-12-13T01:33:01.710328930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:01.711770 containerd[1979]: time="2024-12-13T01:33:01.711199131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.641641ms" Dec 13 01:33:01.711770 containerd[1979]: time="2024-12-13T01:33:01.711242143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:33:01.744773 containerd[1979]: time="2024-12-13T01:33:01.744721813Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:33:02.416504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173839747.mount: Deactivated successfully. Dec 13 01:33:04.106683 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 01:33:06.056247 containerd[1979]: time="2024-12-13T01:33:06.056186464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:06.079447 containerd[1979]: time="2024-12-13T01:33:06.079146580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 01:33:06.097711 containerd[1979]: time="2024-12-13T01:33:06.097638431Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:06.113717 containerd[1979]: time="2024-12-13T01:33:06.113638243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:06.115336 containerd[1979]: time="2024-12-13T01:33:06.115153982Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.370391683s" Dec 13 01:33:06.115336 containerd[1979]: time="2024-12-13T01:33:06.115202577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:33:07.691798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:33:07.705114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:08.287195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:08.299682 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:33:08.413526 kubelet[2746]: E1213 01:33:08.413478 2746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:33:08.420337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:33:08.420895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:33:09.946925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:09.955442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:10.025724 systemd[1]: Reloading requested from client PID 2761 ('systemctl') (unit session-7.scope)... Dec 13 01:33:10.025784 systemd[1]: Reloading... Dec 13 01:33:10.192020 zram_generator::config[2801]: No configuration found. Dec 13 01:33:10.340552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:10.445298 systemd[1]: Reloading finished in 416 ms. Dec 13 01:33:10.505631 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:33:10.505808 systemd[1]: kubelet.service: Failed with result 'signal'. 
Dec 13 01:33:10.506437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:10.512331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:10.932244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:10.953780 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:33:11.056134 kubelet[2861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:11.056134 kubelet[2861]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:33:11.056134 kubelet[2861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:11.060490 kubelet[2861]: I1213 01:33:11.060429 2861 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:33:11.809930 kubelet[2861]: I1213 01:33:11.808821 2861 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:33:11.810509 kubelet[2861]: I1213 01:33:11.809962 2861 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:33:11.811134 kubelet[2861]: I1213 01:33:11.810683 2861 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:33:11.847466 kubelet[2861]: I1213 01:33:11.847431 2861 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:33:11.850098 kubelet[2861]: E1213 01:33:11.849714 2861 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.867544 kubelet[2861]: I1213 01:33:11.867514 2861 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:33:11.871928 kubelet[2861]: I1213 01:33:11.871866 2861 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:33:11.873815 kubelet[2861]: I1213 01:33:11.871925 2861 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:33:11.874172 kubelet[2861]: I1213 01:33:11.873831 2861 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:33:11.874172 kubelet[2861]: I1213 01:33:11.873907 2861 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:33:11.883663 kubelet[2861]: I1213 01:33:11.883611 2861 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:11.886249 kubelet[2861]: I1213 01:33:11.886216 2861 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:33:11.886249 kubelet[2861]: I1213 01:33:11.886250 2861 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:33:11.887986 kubelet[2861]: W1213 01:33:11.886887 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-168&limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.887986 kubelet[2861]: E1213 01:33:11.886966 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-168&limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.887986 kubelet[2861]: I1213 01:33:11.887749 2861 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:33:11.887986 kubelet[2861]: I1213 01:33:11.887807 2861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:33:11.895499 kubelet[2861]: W1213 01:33:11.895122 2861 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.895499 kubelet[2861]: E1213 01:33:11.895190 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.899797 kubelet[2861]: I1213 01:33:11.896543 2861 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:33:11.902415 kubelet[2861]: I1213 01:33:11.902380 2861 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:33:11.902527 kubelet[2861]: W1213 01:33:11.902465 2861 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:33:11.906638 kubelet[2861]: I1213 01:33:11.906489 2861 server.go:1264] "Started kubelet" Dec 13 01:33:11.920838 kubelet[2861]: E1213 01:33:11.920631 2861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.168:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.168:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-168.1810988def12fa3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-168,UID:ip-172-31-21-168,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-168,},FirstTimestamp:2024-12-13 01:33:11.906449983 +0000 UTC m=+0.944255479,LastTimestamp:2024-12-13 01:33:11.906449983 +0000 UTC m=+0.944255479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-168,}" Dec 13 01:33:11.921057 kubelet[2861]: I1213 01:33:11.920902 2861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:33:11.921600 kubelet[2861]: I1213 01:33:11.921426 2861 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:33:11.925673 kubelet[2861]: I1213 01:33:11.923769 2861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:33:11.925673 kubelet[2861]: I1213 01:33:11.924576 2861 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:33:11.927009 kubelet[2861]: I1213 01:33:11.926961 2861 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:33:11.934899 kubelet[2861]: I1213 01:33:11.933347 2861 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:33:11.937333 kubelet[2861]: I1213 01:33:11.937307 2861 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:33:11.938693 kubelet[2861]: I1213 01:33:11.938674 2861 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:33:11.939357 kubelet[2861]: W1213 01:33:11.939283 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.939511 kubelet[2861]: E1213 
01:33:11.939496 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.940106 kubelet[2861]: E1213 01:33:11.940076 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": dial tcp 172.31.21.168:6443: connect: connection refused" interval="200ms" Dec 13 01:33:11.940365 kubelet[2861]: I1213 01:33:11.940351 2861 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:33:11.940534 kubelet[2861]: I1213 01:33:11.940518 2861 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:33:11.945570 kubelet[2861]: I1213 01:33:11.945537 2861 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:33:11.954738 kubelet[2861]: E1213 01:33:11.954704 2861 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:33:11.971264 kubelet[2861]: I1213 01:33:11.971035 2861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:33:11.973314 kubelet[2861]: I1213 01:33:11.973261 2861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:33:11.973426 kubelet[2861]: I1213 01:33:11.973390 2861 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:33:11.973426 kubelet[2861]: I1213 01:33:11.973413 2861 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:33:11.973933 kubelet[2861]: E1213 01:33:11.973815 2861 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:33:11.977231 kubelet[2861]: I1213 01:33:11.977214 2861 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:33:11.978672 kubelet[2861]: I1213 01:33:11.978576 2861 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:33:11.978798 kubelet[2861]: I1213 01:33:11.978601 2861 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:11.981362 kubelet[2861]: W1213 01:33:11.981331 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.983339 kubelet[2861]: E1213 01:33:11.981375 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:11.983339 kubelet[2861]: I1213 01:33:11.981722 2861 policy_none.go:49] "None policy: Start" Dec 13 01:33:11.984638 kubelet[2861]: I1213 01:33:11.984179 2861 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:33:11.984638 kubelet[2861]: I1213 01:33:11.984205 2861 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:33:12.000332 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:33:12.012263 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:33:12.018161 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:33:12.026067 kubelet[2861]: I1213 01:33:12.026030 2861 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:33:12.026655 kubelet[2861]: I1213 01:33:12.026605 2861 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:33:12.027740 kubelet[2861]: I1213 01:33:12.027723 2861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:33:12.029702 kubelet[2861]: E1213 01:33:12.029643 2861 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-168\" not found" Dec 13 01:33:12.037466 kubelet[2861]: I1213 01:33:12.037180 2861 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:12.038222 kubelet[2861]: E1213 01:33:12.038190 2861 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.168:6443/api/v1/nodes\": dial tcp 172.31.21.168:6443: connect: connection refused" node="ip-172-31-21-168" Dec 13 01:33:12.076068 kubelet[2861]: I1213 01:33:12.074855 2861 topology_manager.go:215] "Topology Admit Handler" podUID="578b2f09b8a3712d0d61f48334c1b448" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-168" Dec 13 01:33:12.079548 kubelet[2861]: I1213 01:33:12.079515 2861 topology_manager.go:215] "Topology Admit Handler" podUID="c86bcb35605c8c43ef52e51663dbd2da" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.081165 kubelet[2861]: I1213 01:33:12.080947 2861 topology_manager.go:215] "Topology Admit Handler" podUID="495a5bea895d4c3fd62b9d26dd634033" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-168" Dec 13 01:33:12.090262 systemd[1]: Created slice kubepods-burstable-pod578b2f09b8a3712d0d61f48334c1b448.slice - libcontainer container kubepods-burstable-pod578b2f09b8a3712d0d61f48334c1b448.slice. Dec 13 01:33:12.110703 systemd[1]: Created slice kubepods-burstable-podc86bcb35605c8c43ef52e51663dbd2da.slice - libcontainer container kubepods-burstable-podc86bcb35605c8c43ef52e51663dbd2da.slice. Dec 13 01:33:12.126660 systemd[1]: Created slice kubepods-burstable-pod495a5bea895d4c3fd62b9d26dd634033.slice - libcontainer container kubepods-burstable-pod495a5bea895d4c3fd62b9d26dd634033.slice. 
Dec 13 01:33:12.139378 kubelet[2861]: I1213 01:33:12.139342 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.139378 kubelet[2861]: I1213 01:33:12.139381 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.139582 kubelet[2861]: I1213 01:33:12.139411 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/495a5bea895d4c3fd62b9d26dd634033-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-168\" (UID: \"495a5bea895d4c3fd62b9d26dd634033\") " pod="kube-system/kube-scheduler-ip-172-31-21-168" Dec 13 01:33:12.139582 kubelet[2861]: I1213 01:33:12.139431 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-ca-certs\") pod \"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:12.139582 kubelet[2861]: I1213 01:33:12.139451 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:12.139582 kubelet[2861]: I1213 01:33:12.139472 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.139582 kubelet[2861]: I1213 01:33:12.139495 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.140232 kubelet[2861]: I1213 01:33:12.139517 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:12.140232 kubelet[2861]: I1213 01:33:12.139540 2861 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:12.140948 kubelet[2861]: E1213 01:33:12.140917 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": dial tcp 172.31.21.168:6443: connect: connection refused" interval="400ms" Dec 13 01:33:12.240400 kubelet[2861]: I1213 01:33:12.240324 2861 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:12.240739 kubelet[2861]: E1213 01:33:12.240712 2861 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.168:6443/api/v1/nodes\": dial tcp 172.31.21.168:6443: connect: connection refused" node="ip-172-31-21-168" Dec 13 01:33:12.407997 containerd[1979]: time="2024-12-13T01:33:12.407875879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-168,Uid:578b2f09b8a3712d0d61f48334c1b448,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:12.414663 containerd[1979]: time="2024-12-13T01:33:12.414615992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-168,Uid:c86bcb35605c8c43ef52e51663dbd2da,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:12.431009 containerd[1979]: time="2024-12-13T01:33:12.430944105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-168,Uid:495a5bea895d4c3fd62b9d26dd634033,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:12.542257 kubelet[2861]: E1213 01:33:12.542207 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": dial tcp 172.31.21.168:6443: connect: connection refused" interval="800ms" Dec 13 01:33:12.646630 kubelet[2861]: I1213 01:33:12.646595 2861 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:12.647051 kubelet[2861]: E1213 01:33:12.647007 2861 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.168:6443/api/v1/nodes\": dial tcp 172.31.21.168:6443: connect: connection refused" node="ip-172-31-21-168" Dec 13 01:33:12.812340 kubelet[2861]: W1213 01:33:12.812252 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:12.812340 kubelet[2861]: E1213 01:33:12.812338 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:12.972890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574141173.mount: Deactivated successfully. 
Dec 13 01:33:12.986284 containerd[1979]: time="2024-12-13T01:33:12.986234103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:33:12.987677 containerd[1979]: time="2024-12-13T01:33:12.987620553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:33:12.989253 containerd[1979]: time="2024-12-13T01:33:12.989218255Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:33:12.990727 containerd[1979]: time="2024-12-13T01:33:12.990614813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:33:12.992112 containerd[1979]: time="2024-12-13T01:33:12.992029760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:33:12.993650 containerd[1979]: time="2024-12-13T01:33:12.993612279Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:33:12.996717 containerd[1979]: time="2024-12-13T01:33:12.995162155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:33:12.999376 containerd[1979]: time="2024-12-13T01:33:12.998317401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:33:12.999376 containerd[1979]: time="2024-12-13T01:33:12.999257026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.210716ms" Dec 13 01:33:13.002419 containerd[1979]: time="2024-12-13T01:33:13.002335498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.632182ms" Dec 13 01:33:13.008209 containerd[1979]: time="2024-12-13T01:33:13.008160795Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.205343ms" Dec 13 01:33:13.041757 kubelet[2861]: W1213 01:33:13.041490 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.041757 kubelet[2861]: E1213 
01:33:13.041730 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.278098 containerd[1979]: time="2024-12-13T01:33:13.277667520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:13.278570 containerd[1979]: time="2024-12-13T01:33:13.278515535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:13.278760 containerd[1979]: time="2024-12-13T01:33:13.278719716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.279125 containerd[1979]: time="2024-12-13T01:33:13.279079730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.280554 containerd[1979]: time="2024-12-13T01:33:13.280374352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:13.282362 containerd[1979]: time="2024-12-13T01:33:13.282304945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:13.282533 containerd[1979]: time="2024-12-13T01:33:13.282386504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.283884 containerd[1979]: time="2024-12-13T01:33:13.283477702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:13.283884 containerd[1979]: time="2024-12-13T01:33:13.283535376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:13.283884 containerd[1979]: time="2024-12-13T01:33:13.283564410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.283884 containerd[1979]: time="2024-12-13T01:33:13.283668313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.284174 containerd[1979]: time="2024-12-13T01:33:13.283961421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:13.330230 systemd[1]: Started cri-containerd-8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f.scope - libcontainer container 8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f. Dec 13 01:33:13.342726 kubelet[2861]: E1213 01:33:13.342679 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": dial tcp 172.31.21.168:6443: connect: connection refused" interval="1.6s" Dec 13 01:33:13.344375 systemd[1]: Started cri-containerd-6d3925ff94686b76aeab99a8b27111af8b879fe43515fcd97d0e61e752d84767.scope - libcontainer container 6d3925ff94686b76aeab99a8b27111af8b879fe43515fcd97d0e61e752d84767. 
Dec 13 01:33:13.347199 systemd[1]: Started cri-containerd-e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd.scope - libcontainer container e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd. Dec 13 01:33:13.392935 kubelet[2861]: W1213 01:33:13.392870 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-168&limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.393257 kubelet[2861]: E1213 01:33:13.393135 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-168&limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.413647 kubelet[2861]: W1213 01:33:13.413489 2861 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.413647 kubelet[2861]: E1213 01:33:13.413621 2861 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:13.454872 kubelet[2861]: I1213 01:33:13.454308 2861 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:13.454872 kubelet[2861]: E1213 01:33:13.454805 2861 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.168:6443/api/v1/nodes\": dial tcp 172.31.21.168:6443: connect: connection refused" node="ip-172-31-21-168" Dec 13 01:33:13.461247 containerd[1979]: time="2024-12-13T01:33:13.461177162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-168,Uid:495a5bea895d4c3fd62b9d26dd634033,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd\"" Dec 13 01:33:13.462678 containerd[1979]: time="2024-12-13T01:33:13.462467344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-168,Uid:c86bcb35605c8c43ef52e51663dbd2da,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f\"" Dec 13 01:33:13.474579 containerd[1979]: time="2024-12-13T01:33:13.474548759Z" level=info msg="CreateContainer within sandbox \"e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:33:13.475104 containerd[1979]: time="2024-12-13T01:33:13.474930242Z" level=info msg="CreateContainer within sandbox \"8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:33:13.485531 containerd[1979]: time="2024-12-13T01:33:13.485197351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-168,Uid:578b2f09b8a3712d0d61f48334c1b448,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d3925ff94686b76aeab99a8b27111af8b879fe43515fcd97d0e61e752d84767\"" Dec 13 01:33:13.490119 containerd[1979]: 
time="2024-12-13T01:33:13.490013049Z" level=info msg="CreateContainer within sandbox \"6d3925ff94686b76aeab99a8b27111af8b879fe43515fcd97d0e61e752d84767\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:33:13.512068 containerd[1979]: time="2024-12-13T01:33:13.512020468Z" level=info msg="CreateContainer within sandbox \"8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4\"" Dec 13 01:33:13.512839 containerd[1979]: time="2024-12-13T01:33:13.512755500Z" level=info msg="StartContainer for \"80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4\"" Dec 13 01:33:13.521563 containerd[1979]: time="2024-12-13T01:33:13.521519442Z" level=info msg="CreateContainer within sandbox \"e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1\"" Dec 13 01:33:13.522724 containerd[1979]: time="2024-12-13T01:33:13.522535330Z" level=info msg="StartContainer for \"0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1\"" Dec 13 01:33:13.528277 containerd[1979]: time="2024-12-13T01:33:13.528073470Z" level=info msg="CreateContainer within sandbox \"6d3925ff94686b76aeab99a8b27111af8b879fe43515fcd97d0e61e752d84767\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"acd79d8cfa55a4d252d5ee8ca256081e2156db934c36c348bd718d4bcaf6f965\"" Dec 13 01:33:13.531475 containerd[1979]: time="2024-12-13T01:33:13.529837391Z" level=info msg="StartContainer for \"acd79d8cfa55a4d252d5ee8ca256081e2156db934c36c348bd718d4bcaf6f965\"" Dec 13 01:33:13.574194 systemd[1]: Started cri-containerd-80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4.scope - libcontainer container 80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4. Dec 13 01:33:13.594732 systemd[1]: Started cri-containerd-acd79d8cfa55a4d252d5ee8ca256081e2156db934c36c348bd718d4bcaf6f965.scope - libcontainer container acd79d8cfa55a4d252d5ee8ca256081e2156db934c36c348bd718d4bcaf6f965. Dec 13 01:33:13.608305 systemd[1]: Started cri-containerd-0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1.scope - libcontainer container 0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1. 
Dec 13 01:33:13.675954 containerd[1979]: time="2024-12-13T01:33:13.675913345Z" level=info msg="StartContainer for \"80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4\" returns successfully" Dec 13 01:33:13.715876 containerd[1979]: time="2024-12-13T01:33:13.715708820Z" level=info msg="StartContainer for \"acd79d8cfa55a4d252d5ee8ca256081e2156db934c36c348bd718d4bcaf6f965\" returns successfully" Dec 13 01:33:13.725070 containerd[1979]: time="2024-12-13T01:33:13.724924509Z" level=info msg="StartContainer for \"0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1\" returns successfully" Dec 13 01:33:13.901645 kubelet[2861]: E1213 01:33:13.901504 2861 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.168:6443: connect: connection refused Dec 13 01:33:15.058535 kubelet[2861]: I1213 01:33:15.058501 2861 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:16.411021 kubelet[2861]: E1213 01:33:16.410956 2861 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-168\" not found" node="ip-172-31-21-168" Dec 13 01:33:16.545328 kubelet[2861]: I1213 01:33:16.545284 2861 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-168" Dec 13 01:33:16.896646 kubelet[2861]: I1213 01:33:16.896598 2861 apiserver.go:52] "Watching apiserver" Dec 13 01:33:16.939331 kubelet[2861]: I1213 01:33:16.939295 2861 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:33:18.056899 update_engine[1955]: I20241213 01:33:18.056067 1955 update_attempter.cc:509] Updating boot flags... Dec 13 01:33:18.155124 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3147) Dec 13 01:33:18.365063 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3157) Dec 13 01:33:18.432665 systemd[1]: Reloading requested from client PID 3281 ('systemctl') (unit session-7.scope)... Dec 13 01:33:18.432690 systemd[1]: Reloading... Dec 13 01:33:18.619013 zram_generator::config[3356]: No configuration found. Dec 13 01:33:18.756182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:33:18.871967 systemd[1]: Reloading finished in 438 ms. 
Dec 13 01:33:18.944242 kubelet[2861]: E1213 01:33:18.943596 2861 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-21-168.1810988def12fa3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-168,UID:ip-172-31-21-168,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-168,},FirstTimestamp:2024-12-13 01:33:11.906449983 +0000 UTC m=+0.944255479,LastTimestamp:2024-12-13 01:33:11.906449983 +0000 UTC m=+0.944255479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-168,}" Dec 13 01:33:18.944309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:18.969406 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:33:18.969669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:18.969737 systemd[1]: kubelet.service: Consumed 1.106s CPU time, 116.3M memory peak, 0B memory swap peak. Dec 13 01:33:18.976393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:33:19.373331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:33:19.380548 (kubelet)[3413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:33:19.478638 kubelet[3413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:19.478638 kubelet[3413]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:33:19.478638 kubelet[3413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:33:19.479140 kubelet[3413]: I1213 01:33:19.478703 3413 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:33:19.488746 kubelet[3413]: I1213 01:33:19.488707 3413 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:33:19.488746 kubelet[3413]: I1213 01:33:19.488736 3413 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:33:19.489550 kubelet[3413]: I1213 01:33:19.489215 3413 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:33:19.499745 kubelet[3413]: I1213 01:33:19.498875 3413 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:33:19.503013 kubelet[3413]: I1213 01:33:19.502906 3413 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:33:19.525906 kubelet[3413]: I1213 01:33:19.525878 3413 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:33:19.527023 kubelet[3413]: I1213 01:33:19.526512 3413 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:33:19.527023 kubelet[3413]: I1213 01:33:19.526559 3413 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:33:19.527023 kubelet[3413]: I1213 01:33:19.526820 3413 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:33:19.527023 kubelet[3413]: I1213 01:33:19.526831 3413 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:33:19.527023 kubelet[3413]: I1213 01:33:19.526869 3413 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:19.527390 kubelet[3413]: I1213 01:33:19.526951 3413 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:33:19.527390 kubelet[3413]: I1213 01:33:19.526965 3413 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:33:19.527507 kubelet[3413]: I1213 01:33:19.527494 3413 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:33:19.527786 kubelet[3413]: I1213 01:33:19.527728 3413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:33:19.532228 kubelet[3413]: I1213 01:33:19.532203 3413 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:33:19.533574 kubelet[3413]: I1213 01:33:19.532633 3413 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:33:19.534489 kubelet[3413]: I1213 01:33:19.534081 3413 server.go:1264] "Started kubelet" Dec 13 01:33:19.540796 kubelet[3413]: I1213 01:33:19.540670 3413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:33:19.556107 kubelet[3413]: I1213 01:33:19.556058 3413 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:33:19.563787 kubelet[3413]: I1213 01:33:19.563766 3413 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Dec 13 01:33:19.573864 kubelet[3413]: I1213 01:33:19.573830 3413 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:33:19.576426 kubelet[3413]: I1213 01:33:19.556368 3413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:33:19.577311 kubelet[3413]: I1213 01:33:19.577293 3413 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:33:19.577446 kubelet[3413]: I1213 01:33:19.563911 3413 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:33:19.577618 kubelet[3413]: I1213 01:33:19.577607 3413 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:33:19.579928 kubelet[3413]: I1213 01:33:19.579904 3413 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:33:19.580067 kubelet[3413]: I1213 01:33:19.580044 3413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:33:19.585346 kubelet[3413]: I1213 01:33:19.585298 3413 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:33:19.595722 kubelet[3413]: I1213 01:33:19.595387 3413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:33:19.597730 kubelet[3413]: I1213 01:33:19.597547 3413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:33:19.597730 kubelet[3413]: I1213 01:33:19.597588 3413 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:33:19.597730 kubelet[3413]: I1213 01:33:19.597612 3413 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:33:19.597730 kubelet[3413]: E1213 01:33:19.597663 3413 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661360 3413 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661378 3413 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661396 3413 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661556 3413 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661565 3413 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:33:19.662099 kubelet[3413]: I1213 01:33:19.661644 3413 policy_none.go:49] "None policy: Start" Dec 13 01:33:19.664992 kubelet[3413]: I1213 01:33:19.664061 3413 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:33:19.664992 kubelet[3413]: I1213 01:33:19.664088 3413 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:33:19.664992 kubelet[3413]: I1213 01:33:19.664314 3413 state_mem.go:75] "Updated machine memory state" Dec 13 01:33:19.669207 kubelet[3413]: I1213 01:33:19.668327 3413 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-168" Dec 13 01:33:19.678415 kubelet[3413]: I1213 01:33:19.678362 3413 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:33:19.679811 kubelet[3413]: I1213 01:33:19.679192 3413 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:33:19.679811 kubelet[3413]: I1213 01:33:19.679397 3413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:33:19.692622 kubelet[3413]: I1213 01:33:19.692094 3413 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-168" Dec 13 01:33:19.692622 kubelet[3413]: I1213 01:33:19.692160 3413 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-168" Dec 13 01:33:19.701038 kubelet[3413]: I1213 01:33:19.700096 3413 topology_manager.go:215] "Topology Admit Handler" podUID="578b2f09b8a3712d0d61f48334c1b448" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-168" Dec 13 01:33:19.701038 kubelet[3413]: I1213 01:33:19.700196 3413 topology_manager.go:215] "Topology Admit Handler" podUID="c86bcb35605c8c43ef52e51663dbd2da" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:19.701038 kubelet[3413]: I1213 01:33:19.700269 3413 topology_manager.go:215] "Topology Admit Handler" podUID="495a5bea895d4c3fd62b9d26dd634033" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-168" Dec 13 01:33:19.781147 kubelet[3413]: I1213 01:33:19.781067 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/495a5bea895d4c3fd62b9d26dd634033-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-168\" (UID: \"495a5bea895d4c3fd62b9d26dd634033\") " pod="kube-system/kube-scheduler-ip-172-31-21-168" Dec 13 01:33:19.781341 kubelet[3413]: I1213 01:33:19.781165 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-ca-certs\") pod \"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:19.781341 kubelet[3413]: I1213 01:33:19.781229 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:19.781341 kubelet[3413]: I1213 01:33:19.781256 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:19.781341 kubelet[3413]: I1213 01:33:19.781315 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:19.781700 kubelet[3413]: I1213 01:33:19.781349 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/578b2f09b8a3712d0d61f48334c1b448-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-21-168\" (UID: \"578b2f09b8a3712d0d61f48334c1b448\") " pod="kube-system/kube-apiserver-ip-172-31-21-168" Dec 13 01:33:19.781700 kubelet[3413]: I1213 01:33:19.781407 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:19.781700 kubelet[3413]: I1213 01:33:19.781436 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:19.781700 kubelet[3413]: I1213 01:33:19.781460 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c86bcb35605c8c43ef52e51663dbd2da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-168\" (UID: \"c86bcb35605c8c43ef52e51663dbd2da\") " pod="kube-system/kube-controller-manager-ip-172-31-21-168" Dec 13 01:33:20.531058 kubelet[3413]: I1213 01:33:20.530630 3413 apiserver.go:52] "Watching apiserver" Dec 13 01:33:20.577994 kubelet[3413]: I1213 01:33:20.577907 3413 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:33:20.750496 kubelet[3413]: I1213 01:33:20.750421 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-168" podStartSLOduration=1.750380483 podStartE2EDuration="1.750380483s" podCreationTimestamp="2024-12-13 01:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:20.749116437 +0000 UTC m=+1.358518292" watchObservedRunningTime="2024-12-13 01:33:20.750380483 +0000 UTC m=+1.359782335" Dec 13 01:33:20.801941 kubelet[3413]: I1213 01:33:20.801653 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-168" podStartSLOduration=1.801627866 podStartE2EDuration="1.801627866s" podCreationTimestamp="2024-12-13 01:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:20.781819754 +0000 UTC m=+1.391221612" watchObservedRunningTime="2024-12-13 01:33:20.801627866 +0000 UTC m=+1.411029718" Dec 13 01:33:24.973484 kubelet[3413]: I1213 01:33:24.973417 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-168" podStartSLOduration=5.973378274 podStartE2EDuration="5.973378274s" podCreationTimestamp="2024-12-13 01:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:20.803162654 +0000 UTC m=+1.412564510" watchObservedRunningTime="2024-12-13 01:33:24.973378274 +0000 UTC m=+5.582780128" Dec 13 01:33:25.936924 sudo[2291]: pam_unix(sudo:session): session closed for user root Dec 13 01:33:25.960880 sshd[2288]: pam_unix(sshd:session): session closed for user core Dec 13 01:33:25.965826 
systemd[1]: sshd@6-172.31.21.168:22-139.178.68.195:45516.service: Deactivated successfully. Dec 13 01:33:25.970060 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:33:25.970568 systemd[1]: session-7.scope: Consumed 5.266s CPU time, 186.0M memory peak, 0B memory swap peak. Dec 13 01:33:25.973179 systemd-logind[1953]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:33:25.976832 systemd-logind[1953]: Removed session 7. Dec 13 01:33:35.519695 kubelet[3413]: I1213 01:33:35.519657 3413 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:33:35.524573 containerd[1979]: time="2024-12-13T01:33:35.524491940Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:33:35.525265 kubelet[3413]: I1213 01:33:35.525239 3413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:33:35.808462 kubelet[3413]: I1213 01:33:35.806865 3413 topology_manager.go:215] "Topology Admit Handler" podUID="69ac49a8-6b6f-4a69-af1c-7efd12ee406c" podNamespace="kube-system" podName="kube-proxy-sthvr" Dec 13 01:33:35.815738 kubelet[3413]: W1213 01:33:35.815707 3413 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-168' and this object Dec 13 01:33:35.817049 kubelet[3413]: E1213 01:33:35.817025 3413 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-168' and this object Dec 13 01:33:35.817224 kubelet[3413]: W1213 01:33:35.816091 3413 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-21-168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-168' and this object Dec 13 01:33:35.817597 kubelet[3413]: E1213 01:33:35.817381 3413 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-21-168" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-168' and this object Dec 13 01:33:35.822559 kubelet[3413]: I1213 01:33:35.820558 3413 topology_manager.go:215] "Topology Admit Handler" podUID="569dbd42-28b2-411a-b3b5-830222166854" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-z5lpc" Dec 13 01:33:35.825152 systemd[1]: Created slice kubepods-besteffort-pod69ac49a8_6b6f_4a69_af1c_7efd12ee406c.slice - libcontainer container kubepods-besteffort-pod69ac49a8_6b6f_4a69_af1c_7efd12ee406c.slice. Dec 13 01:33:35.845171 systemd[1]: Created slice kubepods-besteffort-pod569dbd42_28b2_411a_b3b5_830222166854.slice - libcontainer container kubepods-besteffort-pod569dbd42_28b2_411a_b3b5_830222166854.slice. 
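The kubelet entries just above record the node's pod CIDR being pushed to the container runtime as 192.168.0.0/24. As a minimal standalone sketch (plain Python, not tied to any of the components logged here), the standard-library ipaddress module shows what that per-node allocation covers:

    import ipaddress

    # Pod CIDR from the "Updating runtime config through cri with podcidr" entry above.
    cidr = ipaddress.ip_network("192.168.0.0/24")

    print(cidr.num_addresses)      # 256 addresses in the per-node range
    print(cidr.network_address)    # 192.168.0.0
    print(cidr.broadcast_address)  # 192.168.0.255

The runtime only receives the range here; per the "wait for other system components to drop the config" message, pod networking itself is left to the CNI components (the Calico pods admitted later in this log).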
Dec 13 01:33:35.920997 kubelet[3413]: I1213 01:33:35.919216 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhqq\" (UniqueName: \"kubernetes.io/projected/569dbd42-28b2-411a-b3b5-830222166854-kube-api-access-lqhqq\") pod \"tigera-operator-7bc55997bb-z5lpc\" (UID: \"569dbd42-28b2-411a-b3b5-830222166854\") " pod="tigera-operator/tigera-operator-7bc55997bb-z5lpc" Dec 13 01:33:35.920997 kubelet[3413]: I1213 01:33:35.919269 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/569dbd42-28b2-411a-b3b5-830222166854-var-lib-calico\") pod \"tigera-operator-7bc55997bb-z5lpc\" (UID: \"569dbd42-28b2-411a-b3b5-830222166854\") " pod="tigera-operator/tigera-operator-7bc55997bb-z5lpc" Dec 13 01:33:35.920997 kubelet[3413]: I1213 01:33:35.919298 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-xtables-lock\") pod \"kube-proxy-sthvr\" (UID: \"69ac49a8-6b6f-4a69-af1c-7efd12ee406c\") " pod="kube-system/kube-proxy-sthvr" Dec 13 01:33:35.920997 kubelet[3413]: I1213 01:33:35.919323 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4r9\" (UniqueName: \"kubernetes.io/projected/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-kube-api-access-bj4r9\") pod \"kube-proxy-sthvr\" (UID: \"69ac49a8-6b6f-4a69-af1c-7efd12ee406c\") " pod="kube-system/kube-proxy-sthvr" Dec 13 01:33:35.920997 kubelet[3413]: I1213 01:33:35.919347 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-kube-proxy\") pod \"kube-proxy-sthvr\" (UID: \"69ac49a8-6b6f-4a69-af1c-7efd12ee406c\") " pod="kube-system/kube-proxy-sthvr" Dec 13 01:33:35.921326 kubelet[3413]: I1213 01:33:35.919388 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-lib-modules\") pod \"kube-proxy-sthvr\" (UID: \"69ac49a8-6b6f-4a69-af1c-7efd12ee406c\") " pod="kube-system/kube-proxy-sthvr" Dec 13 01:33:36.153239 containerd[1979]: time="2024-12-13T01:33:36.152420944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-z5lpc,Uid:569dbd42-28b2-411a-b3b5-830222166854,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:33:36.224405 containerd[1979]: time="2024-12-13T01:33:36.224294167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:36.224405 containerd[1979]: time="2024-12-13T01:33:36.224350520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:36.224405 containerd[1979]: time="2024-12-13T01:33:36.224365538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:36.225109 containerd[1979]: time="2024-12-13T01:33:36.224913493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:36.253306 systemd[1]: run-containerd-runc-k8s.io-bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd-runc.XMog7F.mount: Deactivated successfully. Dec 13 01:33:36.264226 systemd[1]: Started cri-containerd-bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd.scope - libcontainer container bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd. Dec 13 01:33:36.327065 containerd[1979]: time="2024-12-13T01:33:36.326783973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-z5lpc,Uid:569dbd42-28b2-411a-b3b5-830222166854,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd\"" Dec 13 01:33:36.331135 containerd[1979]: time="2024-12-13T01:33:36.330818017Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:33:37.052021 kubelet[3413]: E1213 01:33:37.051925 3413 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:33:37.052516 kubelet[3413]: E1213 01:33:37.052044 3413 projected.go:200] Error preparing data for projected volume kube-api-access-bj4r9 for pod kube-system/kube-proxy-sthvr: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:33:37.052516 kubelet[3413]: E1213 01:33:37.052157 3413 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-kube-api-access-bj4r9 podName:69ac49a8-6b6f-4a69-af1c-7efd12ee406c nodeName:}" failed. No retries permitted until 2024-12-13 01:33:37.552116389 +0000 UTC m=+18.161518262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bj4r9" (UniqueName: "kubernetes.io/projected/69ac49a8-6b6f-4a69-af1c-7efd12ee406c-kube-api-access-bj4r9") pod "kube-proxy-sthvr" (UID: "69ac49a8-6b6f-4a69-af1c-7efd12ee406c") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:33:37.938212 containerd[1979]: time="2024-12-13T01:33:37.938166592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sthvr,Uid:69ac49a8-6b6f-4a69-af1c-7efd12ee406c,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:37.998061 containerd[1979]: time="2024-12-13T01:33:37.997934615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:37.998264 containerd[1979]: time="2024-12-13T01:33:37.998080446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:37.998264 containerd[1979]: time="2024-12-13T01:33:37.998114966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:37.998505 containerd[1979]: time="2024-12-13T01:33:37.998294042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:38.038232 systemd[1]: run-containerd-runc-k8s.io-3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18-runc.bwu4Pp.mount: Deactivated successfully. Dec 13 01:33:38.050193 systemd[1]: Started cri-containerd-3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18.scope - libcontainer container 3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18. 
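The MountVolume.SetUp failure above for kube-api-access-bj4r9 is scheduled for retry with durationBeforeRetry 500ms, and the entry also prints an absolute "No retries permitted until" time. A small sketch over just those two timestamps (truncated to microseconds, and assuming the error's log time approximates when the backoff clock started) confirms the two figures are consistent:

    from datetime import datetime, timezone

    # Timestamps copied from the nestedpendingoperations entry above.
    failed_at   = datetime(2024, 12, 13, 1, 33, 37, 52157, tzinfo=timezone.utc)   # E1213 01:33:37.052157
    retry_after = datetime(2024, 12, 13, 1, 33, 37, 552116, tzinfo=timezone.utc)  # "No retries permitted until ..."

    print((retry_after - failed_at).total_seconds())  # ~0.5 s, matching durationBeforeRetry=500ms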
Dec 13 01:33:38.078998 containerd[1979]: time="2024-12-13T01:33:38.078923732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sthvr,Uid:69ac49a8-6b6f-4a69-af1c-7efd12ee406c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18\"" Dec 13 01:33:38.084498 containerd[1979]: time="2024-12-13T01:33:38.084325160Z" level=info msg="CreateContainer within sandbox \"3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:33:38.106918 containerd[1979]: time="2024-12-13T01:33:38.106866536Z" level=info msg="CreateContainer within sandbox \"3d099a1ce1c6012a5d1ecdf82bed16f89e458a9fd568eac09e97d2f7e8c3cc18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7c96a35af834a29ff9c1be5ac2bcb6a7b504179753a65899a394f4d7e61b224\"" Dec 13 01:33:38.110699 containerd[1979]: time="2024-12-13T01:33:38.109087794Z" level=info msg="StartContainer for \"e7c96a35af834a29ff9c1be5ac2bcb6a7b504179753a65899a394f4d7e61b224\"" Dec 13 01:33:38.145277 systemd[1]: Started cri-containerd-e7c96a35af834a29ff9c1be5ac2bcb6a7b504179753a65899a394f4d7e61b224.scope - libcontainer container e7c96a35af834a29ff9c1be5ac2bcb6a7b504179753a65899a394f4d7e61b224. Dec 13 01:33:38.198694 containerd[1979]: time="2024-12-13T01:33:38.198647124Z" level=info msg="StartContainer for \"e7c96a35af834a29ff9c1be5ac2bcb6a7b504179753a65899a394f4d7e61b224\" returns successfully" Dec 13 01:33:39.676003 kubelet[3413]: I1213 01:33:39.675478 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sthvr" podStartSLOduration=4.675454916 podStartE2EDuration="4.675454916s" podCreationTimestamp="2024-12-13 01:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:38.712941679 +0000 UTC m=+19.322343536" watchObservedRunningTime="2024-12-13 01:33:39.675454916 +0000 UTC m=+20.284856772" Dec 13 01:33:40.295116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594829857.mount: Deactivated successfully. 
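The pod_startup_latency_tracker entry above for kube-proxy-sthvr reports podStartSLOduration and podStartE2EDuration as the same value, about 4.68 s, and its firstStartedPulling/lastFinishedPulling fields are the Go zero time, i.e. no image-pull window was recorded. A minimal sketch over the timestamps printed in that entry (truncated to microseconds; an observation about the logged numbers, not a description of the tracker's internals) shows the figure matches the gap between the pod's creation timestamp and the watch-observed running time:

    from datetime import datetime, timezone

    created  = datetime(2024, 12, 13, 1, 33, 35, 0, tzinfo=timezone.utc)       # podCreationTimestamp
    observed = datetime(2024, 12, 13, 1, 33, 39, 675454, tzinfo=timezone.utc)  # watchObservedRunningTime

    print((observed - created).total_seconds())  # 4.675454 s, matching podStartSLOduration=4.675454916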
Dec 13 01:33:41.173626 containerd[1979]: time="2024-12-13T01:33:41.173569334Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:41.174917 containerd[1979]: time="2024-12-13T01:33:41.174855614Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764309" Dec 13 01:33:41.178769 containerd[1979]: time="2024-12-13T01:33:41.177701916Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:41.184238 containerd[1979]: time="2024-12-13T01:33:41.184147867Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:41.185095 containerd[1979]: time="2024-12-13T01:33:41.184962995Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 4.854100785s" Dec 13 01:33:41.185095 containerd[1979]: time="2024-12-13T01:33:41.185090208Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:33:41.194024 containerd[1979]: time="2024-12-13T01:33:41.192552205Z" level=info msg="CreateContainer within sandbox \"bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:33:41.277894 containerd[1979]: time="2024-12-13T01:33:41.277782296Z" level=info msg="CreateContainer within sandbox \"bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35\"" Dec 13 01:33:41.279092 containerd[1979]: time="2024-12-13T01:33:41.278708029Z" level=info msg="StartContainer for \"e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35\"" Dec 13 01:33:41.349982 systemd[1]: Started cri-containerd-e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35.scope - libcontainer container e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35. 
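The containerd entries above report both how much data the tigera-operator pull transferred (bytes read=21764309, image size 21758492) and how long it took (4.854100785 s). A quick back-of-the-envelope sketch turns those figures into an average pull rate:

    bytes_read   = 21_764_309   # "active requests=0, bytes read=21764309"
    pull_seconds = 4.854100785  # "... in 4.854100785s"

    print(f"{bytes_read / pull_seconds / 1_000_000:.2f} MB/s")  # ~4.48 MB/s averaged over the pull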
Dec 13 01:33:41.385899 containerd[1979]: time="2024-12-13T01:33:41.385853673Z" level=info msg="StartContainer for \"e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35\" returns successfully" Dec 13 01:33:45.008783 kubelet[3413]: I1213 01:33:45.008632 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-z5lpc" podStartSLOduration=5.146491466 podStartE2EDuration="10.008608969s" podCreationTimestamp="2024-12-13 01:33:35 +0000 UTC" firstStartedPulling="2024-12-13 01:33:36.328908135 +0000 UTC m=+16.938309969" lastFinishedPulling="2024-12-13 01:33:41.191025627 +0000 UTC m=+21.800427472" observedRunningTime="2024-12-13 01:33:41.72723012 +0000 UTC m=+22.336631976" watchObservedRunningTime="2024-12-13 01:33:45.008608969 +0000 UTC m=+25.618010823" Dec 13 01:33:45.013319 kubelet[3413]: I1213 01:33:45.012711 3413 topology_manager.go:215] "Topology Admit Handler" podUID="d60889ef-1074-485b-b4d1-56f1e9b91581" podNamespace="calico-system" podName="calico-typha-6bdcc7988b-c6rdq" Dec 13 01:33:45.039030 systemd[1]: Created slice kubepods-besteffort-podd60889ef_1074_485b_b4d1_56f1e9b91581.slice - libcontainer container kubepods-besteffort-podd60889ef_1074_485b_b4d1_56f1e9b91581.slice. Dec 13 01:33:45.098887 kubelet[3413]: I1213 01:33:45.098838 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60889ef-1074-485b-b4d1-56f1e9b91581-tigera-ca-bundle\") pod \"calico-typha-6bdcc7988b-c6rdq\" (UID: \"d60889ef-1074-485b-b4d1-56f1e9b91581\") " pod="calico-system/calico-typha-6bdcc7988b-c6rdq" Dec 13 01:33:45.100107 kubelet[3413]: I1213 01:33:45.098896 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkzw\" (UniqueName: \"kubernetes.io/projected/d60889ef-1074-485b-b4d1-56f1e9b91581-kube-api-access-7tkzw\") pod \"calico-typha-6bdcc7988b-c6rdq\" (UID: \"d60889ef-1074-485b-b4d1-56f1e9b91581\") " pod="calico-system/calico-typha-6bdcc7988b-c6rdq" Dec 13 01:33:45.100107 kubelet[3413]: I1213 01:33:45.098923 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d60889ef-1074-485b-b4d1-56f1e9b91581-typha-certs\") pod \"calico-typha-6bdcc7988b-c6rdq\" (UID: \"d60889ef-1074-485b-b4d1-56f1e9b91581\") " pod="calico-system/calico-typha-6bdcc7988b-c6rdq" Dec 13 01:33:45.261447 kubelet[3413]: I1213 01:33:45.260900 3413 topology_manager.go:215] "Topology Admit Handler" podUID="ba02b5de-196a-474e-93fa-8abd0dc834dd" podNamespace="calico-system" podName="calico-node-sxdtq" Dec 13 01:33:45.276399 systemd[1]: Created slice kubepods-besteffort-podba02b5de_196a_474e_93fa_8abd0dc834dd.slice - libcontainer container kubepods-besteffort-podba02b5de_196a_474e_93fa_8abd0dc834dd.slice. 
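Unlike the kube-proxy entry earlier, the tigera-operator startup entry above shows a podStartE2EDuration (about 10.01 s) well above its podStartSLOduration (about 5.15 s), and the gap is essentially the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A short sketch over the timestamps in that entry (truncated to microseconds; again an observation about the logged numbers, not the tracker's implementation) makes the relationship explicit:

    from datetime import datetime, timezone

    def ts(h, m, s, us):
        # All timestamps below are 2024-12-13 UTC, copied from the entry above.
        return datetime(2024, 12, 13, h, m, s, us, tzinfo=timezone.utc)

    created        = ts(1, 33, 35, 0)       # podCreationTimestamp
    pull_started   = ts(1, 33, 36, 328908)  # firstStartedPulling
    pull_finished  = ts(1, 33, 41, 191025)  # lastFinishedPulling
    watch_observed = ts(1, 33, 45, 8608)    # watchObservedRunningTime

    e2e  = (watch_observed - created).total_seconds()
    pull = (pull_finished - pull_started).total_seconds()
    print(e2e)         # ~10.009 s, matching podStartE2EDuration
    print(e2e - pull)  # ~5.146 s, matching podStartSLOduration

The roughly 4.86 s pull window here lines up with the ~4.85 s pull duration containerd reported above for quay.io/tigera/operator:v1.36.2.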
Dec 13 01:33:45.357923 containerd[1979]: time="2024-12-13T01:33:45.356825421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bdcc7988b-c6rdq,Uid:d60889ef-1074-485b-b4d1-56f1e9b91581,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:45.400718 kubelet[3413]: I1213 01:33:45.400651 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba02b5de-196a-474e-93fa-8abd0dc834dd-tigera-ca-bundle\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.400718 kubelet[3413]: I1213 01:33:45.400715 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-lib-modules\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401139 kubelet[3413]: I1213 01:33:45.400741 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-var-run-calico\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401139 kubelet[3413]: I1213 01:33:45.400768 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-cni-bin-dir\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401139 kubelet[3413]: I1213 01:33:45.400792 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-policysync\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401139 kubelet[3413]: I1213 01:33:45.400815 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba02b5de-196a-474e-93fa-8abd0dc834dd-node-certs\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401139 kubelet[3413]: I1213 01:33:45.400841 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-flexvol-driver-host\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401758 kubelet[3413]: I1213 01:33:45.400865 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcv9\" (UniqueName: \"kubernetes.io/projected/ba02b5de-196a-474e-93fa-8abd0dc834dd-kube-api-access-lhcv9\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401758 kubelet[3413]: I1213 01:33:45.400893 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-cni-net-dir\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401758 kubelet[3413]: I1213 01:33:45.400919 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-cni-log-dir\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401758 kubelet[3413]: I1213 01:33:45.400943 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-xtables-lock\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.401758 kubelet[3413]: I1213 01:33:45.400986 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba02b5de-196a-474e-93fa-8abd0dc834dd-var-lib-calico\") pod \"calico-node-sxdtq\" (UID: \"ba02b5de-196a-474e-93fa-8abd0dc834dd\") " pod="calico-system/calico-node-sxdtq" Dec 13 01:33:45.449171 containerd[1979]: time="2024-12-13T01:33:45.446559029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:45.449171 containerd[1979]: time="2024-12-13T01:33:45.446653284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:45.449171 containerd[1979]: time="2024-12-13T01:33:45.446678008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:45.449171 containerd[1979]: time="2024-12-13T01:33:45.447070691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:45.499190 systemd[1]: Started cri-containerd-e082063ba417c89c384e02714dfc6eefbf51bf9d3459cf033d9f0efe7cccaa69.scope - libcontainer container e082063ba417c89c384e02714dfc6eefbf51bf9d3459cf033d9f0efe7cccaa69. Dec 13 01:33:45.514626 kubelet[3413]: E1213 01:33:45.513943 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.514626 kubelet[3413]: W1213 01:33:45.513996 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.516800 kubelet[3413]: E1213 01:33:45.516699 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.521075 kubelet[3413]: I1213 01:33:45.520932 3413 topology_manager.go:215] "Topology Admit Handler" podUID="3cc2106b-553b-4660-9b27-e2c825955271" podNamespace="calico-system" podName="csi-node-driver-h9g2n" Dec 13 01:33:45.522537 kubelet[3413]: E1213 01:33:45.522136 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:45.530609 kubelet[3413]: E1213 01:33:45.530540 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.530609 kubelet[3413]: W1213 01:33:45.530600 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.531291 kubelet[3413]: E1213 01:33:45.531013 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.552997 kubelet[3413]: E1213 01:33:45.552807 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.552997 kubelet[3413]: W1213 01:33:45.552831 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.552997 kubelet[3413]: E1213 01:33:45.552857 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.553397 kubelet[3413]: E1213 01:33:45.553329 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.553397 kubelet[3413]: W1213 01:33:45.553345 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.553397 kubelet[3413]: E1213 01:33:45.553361 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.555429 kubelet[3413]: E1213 01:33:45.555178 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.555429 kubelet[3413]: W1213 01:33:45.555203 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.555429 kubelet[3413]: E1213 01:33:45.555218 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.555710 kubelet[3413]: E1213 01:33:45.555581 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.556192 kubelet[3413]: W1213 01:33:45.556032 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.556192 kubelet[3413]: E1213 01:33:45.556057 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.556514 kubelet[3413]: E1213 01:33:45.556372 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.556514 kubelet[3413]: W1213 01:33:45.556385 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.556514 kubelet[3413]: E1213 01:33:45.556420 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.556859 kubelet[3413]: E1213 01:33:45.556753 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.556859 kubelet[3413]: W1213 01:33:45.556764 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.557286 kubelet[3413]: E1213 01:33:45.556962 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.557545 kubelet[3413]: E1213 01:33:45.557424 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.557545 kubelet[3413]: W1213 01:33:45.557437 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.557545 kubelet[3413]: E1213 01:33:45.557451 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.557938 kubelet[3413]: E1213 01:33:45.557882 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.557938 kubelet[3413]: W1213 01:33:45.557895 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.560451 kubelet[3413]: E1213 01:33:45.560064 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.561330 kubelet[3413]: E1213 01:33:45.561179 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.561330 kubelet[3413]: W1213 01:33:45.561208 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.561330 kubelet[3413]: E1213 01:33:45.561226 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.562476 kubelet[3413]: E1213 01:33:45.562360 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.562476 kubelet[3413]: W1213 01:33:45.562376 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.562476 kubelet[3413]: E1213 01:33:45.562413 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.563116 kubelet[3413]: E1213 01:33:45.562965 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.563116 kubelet[3413]: W1213 01:33:45.562998 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.563116 kubelet[3413]: E1213 01:33:45.563013 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.563653 kubelet[3413]: E1213 01:33:45.563507 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.563653 kubelet[3413]: W1213 01:33:45.563519 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.563653 kubelet[3413]: E1213 01:33:45.563542 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.564114 kubelet[3413]: E1213 01:33:45.563965 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.564114 kubelet[3413]: W1213 01:33:45.564016 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.564114 kubelet[3413]: E1213 01:33:45.564031 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.564666 kubelet[3413]: E1213 01:33:45.564536 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.564666 kubelet[3413]: W1213 01:33:45.564551 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.564666 kubelet[3413]: E1213 01:33:45.564576 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.565199 kubelet[3413]: E1213 01:33:45.565085 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.565199 kubelet[3413]: W1213 01:33:45.565100 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.565199 kubelet[3413]: E1213 01:33:45.565113 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.565769 kubelet[3413]: E1213 01:33:45.565612 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.565769 kubelet[3413]: W1213 01:33:45.565625 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.565769 kubelet[3413]: E1213 01:33:45.565638 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.566466 kubelet[3413]: E1213 01:33:45.566327 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.566466 kubelet[3413]: W1213 01:33:45.566340 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.566466 kubelet[3413]: E1213 01:33:45.566390 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.567066 kubelet[3413]: E1213 01:33:45.566943 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.567066 kubelet[3413]: W1213 01:33:45.566956 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.567066 kubelet[3413]: E1213 01:33:45.566989 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.567581 kubelet[3413]: E1213 01:33:45.567469 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.567581 kubelet[3413]: W1213 01:33:45.567500 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.567581 kubelet[3413]: E1213 01:33:45.567514 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.568202 kubelet[3413]: E1213 01:33:45.568064 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.568202 kubelet[3413]: W1213 01:33:45.568080 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.568202 kubelet[3413]: E1213 01:33:45.568096 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.573491 kubelet[3413]: E1213 01:33:45.573383 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.573491 kubelet[3413]: W1213 01:33:45.573419 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.573491 kubelet[3413]: E1213 01:33:45.573442 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.586460 containerd[1979]: time="2024-12-13T01:33:45.585732679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sxdtq,Uid:ba02b5de-196a-474e-93fa-8abd0dc834dd,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:45.603315 kubelet[3413]: E1213 01:33:45.603278 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.603930 kubelet[3413]: W1213 01:33:45.603847 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.603930 kubelet[3413]: E1213 01:33:45.603903 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.604626 kubelet[3413]: I1213 01:33:45.604492 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3cc2106b-553b-4660-9b27-e2c825955271-varrun\") pod \"csi-node-driver-h9g2n\" (UID: \"3cc2106b-553b-4660-9b27-e2c825955271\") " pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:45.604996 kubelet[3413]: E1213 01:33:45.604897 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.604996 kubelet[3413]: W1213 01:33:45.604913 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.604996 kubelet[3413]: E1213 01:33:45.604931 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.605751 kubelet[3413]: E1213 01:33:45.605634 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.605751 kubelet[3413]: W1213 01:33:45.605672 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.605751 kubelet[3413]: E1213 01:33:45.605703 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.606610 kubelet[3413]: E1213 01:33:45.606534 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.606610 kubelet[3413]: W1213 01:33:45.606566 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.606610 kubelet[3413]: E1213 01:33:45.606581 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.607364 kubelet[3413]: I1213 01:33:45.607117 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cc2106b-553b-4660-9b27-e2c825955271-kubelet-dir\") pod \"csi-node-driver-h9g2n\" (UID: \"3cc2106b-553b-4660-9b27-e2c825955271\") " pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:45.607891 kubelet[3413]: E1213 01:33:45.607818 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.608067 kubelet[3413]: W1213 01:33:45.607835 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.608067 kubelet[3413]: E1213 01:33:45.608017 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.608581 kubelet[3413]: E1213 01:33:45.608517 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.608581 kubelet[3413]: W1213 01:33:45.608563 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.608861 kubelet[3413]: E1213 01:33:45.608725 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.609492 kubelet[3413]: E1213 01:33:45.609255 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.609492 kubelet[3413]: W1213 01:33:45.609269 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.609492 kubelet[3413]: E1213 01:33:45.609471 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.610061 kubelet[3413]: I1213 01:33:45.609686 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsk9\" (UniqueName: \"kubernetes.io/projected/3cc2106b-553b-4660-9b27-e2c825955271-kube-api-access-nwsk9\") pod \"csi-node-driver-h9g2n\" (UID: \"3cc2106b-553b-4660-9b27-e2c825955271\") " pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:45.610442 kubelet[3413]: E1213 01:33:45.610427 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.610607 kubelet[3413]: W1213 01:33:45.610571 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.611149 kubelet[3413]: E1213 01:33:45.611100 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.611467 kubelet[3413]: E1213 01:33:45.611404 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.611467 kubelet[3413]: W1213 01:33:45.611418 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.614288 kubelet[3413]: E1213 01:33:45.611447 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.614288 kubelet[3413]: I1213 01:33:45.614266 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3cc2106b-553b-4660-9b27-e2c825955271-socket-dir\") pod \"csi-node-driver-h9g2n\" (UID: \"3cc2106b-553b-4660-9b27-e2c825955271\") " pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:45.615540 kubelet[3413]: E1213 01:33:45.615500 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.615540 kubelet[3413]: W1213 01:33:45.615520 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.615540 kubelet[3413]: E1213 01:33:45.615538 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.617535 kubelet[3413]: E1213 01:33:45.617511 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.617741 kubelet[3413]: W1213 01:33:45.617539 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.617741 kubelet[3413]: E1213 01:33:45.617669 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.621953 kubelet[3413]: E1213 01:33:45.621757 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.621953 kubelet[3413]: W1213 01:33:45.621790 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.623665 kubelet[3413]: E1213 01:33:45.622654 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.625064 kubelet[3413]: E1213 01:33:45.624395 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.625064 kubelet[3413]: W1213 01:33:45.625020 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.625064 kubelet[3413]: E1213 01:33:45.625040 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.625236 kubelet[3413]: I1213 01:33:45.625094 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3cc2106b-553b-4660-9b27-e2c825955271-registration-dir\") pod \"csi-node-driver-h9g2n\" (UID: \"3cc2106b-553b-4660-9b27-e2c825955271\") " pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:45.629518 kubelet[3413]: E1213 01:33:45.629173 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.629518 kubelet[3413]: W1213 01:33:45.629201 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.629518 kubelet[3413]: E1213 01:33:45.629226 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.630435 kubelet[3413]: E1213 01:33:45.630046 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.630435 kubelet[3413]: W1213 01:33:45.630063 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.630435 kubelet[3413]: E1213 01:33:45.630102 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.652615 containerd[1979]: time="2024-12-13T01:33:45.650493983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:33:45.652615 containerd[1979]: time="2024-12-13T01:33:45.650567680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:33:45.652615 containerd[1979]: time="2024-12-13T01:33:45.650589090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:45.652615 containerd[1979]: time="2024-12-13T01:33:45.650712307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:33:45.690243 systemd[1]: Started cri-containerd-0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911.scope - libcontainer container 0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911. Dec 13 01:33:45.728004 kubelet[3413]: E1213 01:33:45.726802 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.728004 kubelet[3413]: W1213 01:33:45.726829 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.728004 kubelet[3413]: E1213 01:33:45.726852 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.728004 kubelet[3413]: E1213 01:33:45.727199 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.728004 kubelet[3413]: W1213 01:33:45.727213 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.728004 kubelet[3413]: E1213 01:33:45.727239 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.728284 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.729522 kubelet[3413]: W1213 01:33:45.728297 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.728325 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.728608 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.729522 kubelet[3413]: W1213 01:33:45.728618 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.728649 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.728881 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.729522 kubelet[3413]: W1213 01:33:45.728891 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.729522 kubelet[3413]: E1213 01:33:45.729080 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.731050 kubelet[3413]: E1213 01:33:45.729878 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.731050 kubelet[3413]: W1213 01:33:45.729893 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.731050 kubelet[3413]: E1213 01:33:45.730061 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.731050 kubelet[3413]: E1213 01:33:45.730242 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.731050 kubelet[3413]: W1213 01:33:45.730252 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.731354 kubelet[3413]: E1213 01:33:45.731257 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.731865 kubelet[3413]: E1213 01:33:45.731849 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.732179 kubelet[3413]: W1213 01:33:45.731898 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.732365 kubelet[3413]: E1213 01:33:45.732256 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.732552 kubelet[3413]: E1213 01:33:45.732540 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.733915 kubelet[3413]: W1213 01:33:45.732650 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.733915 kubelet[3413]: E1213 01:33:45.732685 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.734417 kubelet[3413]: E1213 01:33:45.734310 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.734417 kubelet[3413]: W1213 01:33:45.734325 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.734862 kubelet[3413]: E1213 01:33:45.734693 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.734862 kubelet[3413]: E1213 01:33:45.734718 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.734862 kubelet[3413]: W1213 01:33:45.734728 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.734862 kubelet[3413]: E1213 01:33:45.734789 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.735518 kubelet[3413]: E1213 01:33:45.735324 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.735518 kubelet[3413]: W1213 01:33:45.735337 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.735842 kubelet[3413]: E1213 01:33:45.735653 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.735842 kubelet[3413]: E1213 01:33:45.735717 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.735842 kubelet[3413]: W1213 01:33:45.735727 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.735842 kubelet[3413]: E1213 01:33:45.735752 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.736813 kubelet[3413]: E1213 01:33:45.736593 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.736813 kubelet[3413]: W1213 01:33:45.736609 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.736813 kubelet[3413]: E1213 01:33:45.736671 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.737136 kubelet[3413]: E1213 01:33:45.737032 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.737136 kubelet[3413]: W1213 01:33:45.737045 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.738318 kubelet[3413]: E1213 01:33:45.738044 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.738318 kubelet[3413]: W1213 01:33:45.738058 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.738683 kubelet[3413]: E1213 01:33:45.738477 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.738683 kubelet[3413]: E1213 01:33:45.738515 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.738683 kubelet[3413]: E1213 01:33:45.738536 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.738683 kubelet[3413]: W1213 01:33:45.738546 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.740140 kubelet[3413]: E1213 01:33:45.739323 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.740302 kubelet[3413]: E1213 01:33:45.740290 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.740497 kubelet[3413]: W1213 01:33:45.740371 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.740497 kubelet[3413]: E1213 01:33:45.740418 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.742607 kubelet[3413]: E1213 01:33:45.740860 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.742607 kubelet[3413]: W1213 01:33:45.740872 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.742607 kubelet[3413]: E1213 01:33:45.740962 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.746546 kubelet[3413]: E1213 01:33:45.744355 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.746546 kubelet[3413]: W1213 01:33:45.744370 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.746546 kubelet[3413]: E1213 01:33:45.746246 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.746546 kubelet[3413]: W1213 01:33:45.746260 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.746546 kubelet[3413]: E1213 01:33:45.746393 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.746546 kubelet[3413]: E1213 01:33:45.746418 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.747037 kubelet[3413]: E1213 01:33:45.747015 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.747999 kubelet[3413]: W1213 01:33:45.747111 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.747999 kubelet[3413]: E1213 01:33:45.747164 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.748620 kubelet[3413]: E1213 01:33:45.748354 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.748620 kubelet[3413]: W1213 01:33:45.748367 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.748620 kubelet[3413]: E1213 01:33:45.748396 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.748909 kubelet[3413]: E1213 01:33:45.748821 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.748909 kubelet[3413]: W1213 01:33:45.748833 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.749047 kubelet[3413]: E1213 01:33:45.749034 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.749425 kubelet[3413]: E1213 01:33:45.749411 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.749511 kubelet[3413]: W1213 01:33:45.749500 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.749593 kubelet[3413]: E1213 01:33:45.749581 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:33:45.757899 kubelet[3413]: E1213 01:33:45.757875 3413 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:33:45.758190 kubelet[3413]: W1213 01:33:45.758117 3413 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:33:45.758190 kubelet[3413]: E1213 01:33:45.758150 3413 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:33:45.793674 containerd[1979]: time="2024-12-13T01:33:45.791533232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sxdtq,Uid:ba02b5de-196a-474e-93fa-8abd0dc834dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\"" Dec 13 01:33:45.811343 containerd[1979]: time="2024-12-13T01:33:45.811304755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:33:45.842663 containerd[1979]: time="2024-12-13T01:33:45.842395536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bdcc7988b-c6rdq,Uid:d60889ef-1074-485b-b4d1-56f1e9b91581,Namespace:calico-system,Attempt:0,} returns sandbox id \"e082063ba417c89c384e02714dfc6eefbf51bf9d3459cf033d9f0efe7cccaa69\"" Dec 13 01:33:46.598992 kubelet[3413]: E1213 01:33:46.598858 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:47.105195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523122598.mount: Deactivated successfully. Dec 13 01:33:47.258039 containerd[1979]: time="2024-12-13T01:33:47.257966507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:47.259748 containerd[1979]: time="2024-12-13T01:33:47.259681785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 01:33:47.261280 containerd[1979]: time="2024-12-13T01:33:47.261236560Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:47.265731 containerd[1979]: time="2024-12-13T01:33:47.264706423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:47.265731 containerd[1979]: time="2024-12-13T01:33:47.265498954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.453973647s" Dec 13 01:33:47.265731 containerd[1979]: time="2024-12-13T01:33:47.265542962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:33:47.270453 containerd[1979]: time="2024-12-13T01:33:47.268560359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:33:47.346025 containerd[1979]: time="2024-12-13T01:33:47.345883843Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:33:47.412015 containerd[1979]: 
time="2024-12-13T01:33:47.411894083Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da\"" Dec 13 01:33:47.413407 containerd[1979]: time="2024-12-13T01:33:47.413197390Z" level=info msg="StartContainer for \"124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da\"" Dec 13 01:33:47.486208 systemd[1]: Started cri-containerd-124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da.scope - libcontainer container 124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da. Dec 13 01:33:47.530760 containerd[1979]: time="2024-12-13T01:33:47.530712014Z" level=info msg="StartContainer for \"124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da\" returns successfully" Dec 13 01:33:47.547220 systemd[1]: cri-containerd-124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da.scope: Deactivated successfully. Dec 13 01:33:47.578275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da-rootfs.mount: Deactivated successfully. Dec 13 01:33:47.975640 containerd[1979]: time="2024-12-13T01:33:47.942912648Z" level=info msg="shim disconnected" id=124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da namespace=k8s.io Dec 13 01:33:47.975905 containerd[1979]: time="2024-12-13T01:33:47.975641183Z" level=warning msg="cleaning up after shim disconnected" id=124edcffcd8463165e02175520337cebcbd7572debe6674ca761628cbbc003da namespace=k8s.io Dec 13 01:33:47.975905 containerd[1979]: time="2024-12-13T01:33:47.975662094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:48.627253 kubelet[3413]: E1213 01:33:48.627205 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:49.968188 containerd[1979]: time="2024-12-13T01:33:49.968140390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:49.969595 containerd[1979]: time="2024-12-13T01:33:49.969462123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Dec 13 01:33:49.972879 containerd[1979]: time="2024-12-13T01:33:49.971250042Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:49.974552 containerd[1979]: time="2024-12-13T01:33:49.974513536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:49.975527 containerd[1979]: time="2024-12-13T01:33:49.975492558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size 
\"31343217\" in 2.706843785s" Dec 13 01:33:49.975672 containerd[1979]: time="2024-12-13T01:33:49.975648906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:33:49.977351 containerd[1979]: time="2024-12-13T01:33:49.977325636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:33:49.997645 containerd[1979]: time="2024-12-13T01:33:49.996586652Z" level=info msg="CreateContainer within sandbox \"e082063ba417c89c384e02714dfc6eefbf51bf9d3459cf033d9f0efe7cccaa69\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:33:50.022272 containerd[1979]: time="2024-12-13T01:33:50.022223054Z" level=info msg="CreateContainer within sandbox \"e082063ba417c89c384e02714dfc6eefbf51bf9d3459cf033d9f0efe7cccaa69\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e790b386a9a4e3063e33bab46e67c487d7d7f9af03022abef11505b16b279043\"" Dec 13 01:33:50.024219 containerd[1979]: time="2024-12-13T01:33:50.024176350Z" level=info msg="StartContainer for \"e790b386a9a4e3063e33bab46e67c487d7d7f9af03022abef11505b16b279043\"" Dec 13 01:33:50.080169 systemd[1]: Started cri-containerd-e790b386a9a4e3063e33bab46e67c487d7d7f9af03022abef11505b16b279043.scope - libcontainer container e790b386a9a4e3063e33bab46e67c487d7d7f9af03022abef11505b16b279043. Dec 13 01:33:50.145288 containerd[1979]: time="2024-12-13T01:33:50.145242306Z" level=info msg="StartContainer for \"e790b386a9a4e3063e33bab46e67c487d7d7f9af03022abef11505b16b279043\" returns successfully" Dec 13 01:33:50.598775 kubelet[3413]: E1213 01:33:50.598724 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:50.785345 kubelet[3413]: I1213 01:33:50.778106 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bdcc7988b-c6rdq" podStartSLOduration=2.64843098 podStartE2EDuration="6.77775004s" podCreationTimestamp="2024-12-13 01:33:44 +0000 UTC" firstStartedPulling="2024-12-13 01:33:45.84727096 +0000 UTC m=+26.456672802" lastFinishedPulling="2024-12-13 01:33:49.976590026 +0000 UTC m=+30.585991862" observedRunningTime="2024-12-13 01:33:50.775201077 +0000 UTC m=+31.384603057" watchObservedRunningTime="2024-12-13 01:33:50.77775004 +0000 UTC m=+31.387151889" Dec 13 01:33:51.767999 kubelet[3413]: I1213 01:33:51.766597 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:52.599664 kubelet[3413]: E1213 01:33:52.598826 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:54.031479 kubelet[3413]: I1213 01:33:54.031442 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:33:54.598072 kubelet[3413]: E1213 01:33:54.597984 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:55.093849 containerd[1979]: time="2024-12-13T01:33:55.093795144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:55.095333 containerd[1979]: time="2024-12-13T01:33:55.095150058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:33:55.097268 containerd[1979]: time="2024-12-13T01:33:55.096899472Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:55.111242 containerd[1979]: time="2024-12-13T01:33:55.111196446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:55.111940 containerd[1979]: time="2024-12-13T01:33:55.111901023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.134438608s" Dec 13 01:33:55.112054 containerd[1979]: time="2024-12-13T01:33:55.111945617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:33:55.115194 containerd[1979]: time="2024-12-13T01:33:55.115159720Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:33:55.147165 containerd[1979]: time="2024-12-13T01:33:55.147115608Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5\"" Dec 13 01:33:55.149291 containerd[1979]: time="2024-12-13T01:33:55.149248433Z" level=info msg="StartContainer for \"5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5\"" Dec 13 01:33:55.225183 systemd[1]: Started cri-containerd-5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5.scope - libcontainer container 5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5. Dec 13 01:33:55.268465 containerd[1979]: time="2024-12-13T01:33:55.268378078Z" level=info msg="StartContainer for \"5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5\" returns successfully" Dec 13 01:33:56.532870 systemd[1]: cri-containerd-5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5.scope: Deactivated successfully. Dec 13 01:33:56.590914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5-rootfs.mount: Deactivated successfully. 
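The burst of "FlexVolume: driver call failed" / "Error dynamically probing plugins" entries above all point at the same missing binary, /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, which the kubelet keeps probing while Calico's flexvol-driver init container (created above from the pod2daemon-flexvol image) has not yet installed it. A minimal sketch for confirming that from the journal text; the script name and the journalctl invocation are illustrative, not taken from this log:

#!/usr/bin/env python3
# Count the repeated FlexVolume probe failures and confirm they all reference
# the same missing driver binary. Reads journal text such as the entries above
# from stdin, e.g.:
#   journalctl -u kubelet | python3 flexvol_probe_count.py
import re
import sys
from collections import Counter

# Matches: FlexVolume: driver call failed: executable: <path>, args: [<cmd>], ...
PATTERN = re.compile(
    r"FlexVolume: driver call failed: executable: ([^,]+), args: \[([^\]]+)\]"
)

counts = Counter()
for line in sys.stdin:
    m = PATTERN.search(line)
    if m:
        counts[(m.group(1), m.group(2))] += 1

for (path, command), n in counts.most_common():
    print(f"{n:4d}x  {command:<8s}  {path}")

Once the flexvol-driver container has run and installed the driver, these probe warnings would be expected to stop.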
Dec 13 01:33:56.599840 kubelet[3413]: E1213 01:33:56.597867 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:56.615743 kubelet[3413]: I1213 01:33:56.615715 3413 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:33:56.667330 kubelet[3413]: I1213 01:33:56.667121 3413 topology_manager.go:215] "Topology Admit Handler" podUID="65795569-3321-472e-aa6d-4a50b09325de" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cxh29" Dec 13 01:33:56.676837 kubelet[3413]: I1213 01:33:56.676464 3413 topology_manager.go:215] "Topology Admit Handler" podUID="1d278366-e6f8-4953-ad50-101ffdd81ba1" podNamespace="calico-system" podName="calico-kube-controllers-78977ddc75-mllbh" Dec 13 01:33:56.685042 containerd[1979]: time="2024-12-13T01:33:56.684906720Z" level=info msg="shim disconnected" id=5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5 namespace=k8s.io Dec 13 01:33:56.685042 containerd[1979]: time="2024-12-13T01:33:56.684985306Z" level=warning msg="cleaning up after shim disconnected" id=5de0e77c5d0e735700f065ed72d2acff595bd318456a07894055134f94b838f5 namespace=k8s.io Dec 13 01:33:56.685042 containerd[1979]: time="2024-12-13T01:33:56.684999238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:33:56.691467 kubelet[3413]: I1213 01:33:56.690653 3413 topology_manager.go:215] "Topology Admit Handler" podUID="c486b937-2444-419d-bb0f-429a58e9c9a6" podNamespace="calico-apiserver" podName="calico-apiserver-6f66dbc9d-8djcg" Dec 13 01:33:56.699999 kubelet[3413]: I1213 01:33:56.699367 3413 topology_manager.go:215] "Topology Admit Handler" podUID="6d64a3cc-1e12-4f51-9849-d93278d09aa0" podNamespace="calico-apiserver" podName="calico-apiserver-6f66dbc9d-xwlp5" Dec 13 01:33:56.699999 kubelet[3413]: I1213 01:33:56.699632 3413 topology_manager.go:215] "Topology Admit Handler" podUID="aa5dce48-74be-45a6-b213-0b52ff4a1cc4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cnqvk" Dec 13 01:33:56.720159 systemd[1]: Created slice kubepods-burstable-pod65795569_3321_472e_aa6d_4a50b09325de.slice - libcontainer container kubepods-burstable-pod65795569_3321_472e_aa6d_4a50b09325de.slice. 
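The "Topology Admit Handler" entries above record the pods the kubelet admitted once the node reported Ready. A small sketch, written against the field layout shown in those entries (the script name is illustrative), that lists each admitted pod with its namespace and UID:

#!/usr/bin/env python3
# Extract the pods admitted by the kubelet ("Topology Admit Handler" entries)
# from journal text on stdin, e.g.:
#   journalctl -u kubelet | python3 list_admitted_pods.py
import re
import sys

PATTERN = re.compile(
    r'"Topology Admit Handler" podUID="([^"]+)" podNamespace="([^"]+)" podName="([^"]+)"'
)

for line in sys.stdin:
    m = PATTERN.search(line)
    if m:
        uid, namespace, name = m.groups()
        print(f"{namespace}/{name}  uid={uid}")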
Dec 13 01:33:56.734821 kubelet[3413]: I1213 01:33:56.734782 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxws7\" (UniqueName: \"kubernetes.io/projected/1d278366-e6f8-4953-ad50-101ffdd81ba1-kube-api-access-qxws7\") pod \"calico-kube-controllers-78977ddc75-mllbh\" (UID: \"1d278366-e6f8-4953-ad50-101ffdd81ba1\") " pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" Dec 13 01:33:56.734991 kubelet[3413]: I1213 01:33:56.734846 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65795569-3321-472e-aa6d-4a50b09325de-config-volume\") pod \"coredns-7db6d8ff4d-cxh29\" (UID: \"65795569-3321-472e-aa6d-4a50b09325de\") " pod="kube-system/coredns-7db6d8ff4d-cxh29" Dec 13 01:33:56.734991 kubelet[3413]: I1213 01:33:56.734879 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c486b937-2444-419d-bb0f-429a58e9c9a6-calico-apiserver-certs\") pod \"calico-apiserver-6f66dbc9d-8djcg\" (UID: \"c486b937-2444-419d-bb0f-429a58e9c9a6\") " pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" Dec 13 01:33:56.734991 kubelet[3413]: I1213 01:33:56.734908 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdp5h\" (UniqueName: \"kubernetes.io/projected/65795569-3321-472e-aa6d-4a50b09325de-kube-api-access-hdp5h\") pod \"coredns-7db6d8ff4d-cxh29\" (UID: \"65795569-3321-472e-aa6d-4a50b09325de\") " pod="kube-system/coredns-7db6d8ff4d-cxh29" Dec 13 01:33:56.734991 kubelet[3413]: I1213 01:33:56.734933 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d278366-e6f8-4953-ad50-101ffdd81ba1-tigera-ca-bundle\") pod \"calico-kube-controllers-78977ddc75-mllbh\" (UID: \"1d278366-e6f8-4953-ad50-101ffdd81ba1\") " pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" Dec 13 01:33:56.734991 kubelet[3413]: I1213 01:33:56.734964 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skl5\" (UniqueName: \"kubernetes.io/projected/c486b937-2444-419d-bb0f-429a58e9c9a6-kube-api-access-8skl5\") pod \"calico-apiserver-6f66dbc9d-8djcg\" (UID: \"c486b937-2444-419d-bb0f-429a58e9c9a6\") " pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" Dec 13 01:33:56.738898 systemd[1]: Created slice kubepods-besteffort-pod1d278366_e6f8_4953_ad50_101ffdd81ba1.slice - libcontainer container kubepods-besteffort-pod1d278366_e6f8_4953_ad50_101ffdd81ba1.slice. Dec 13 01:33:56.752217 systemd[1]: Created slice kubepods-besteffort-podc486b937_2444_419d_bb0f_429a58e9c9a6.slice - libcontainer container kubepods-besteffort-podc486b937_2444_419d_bb0f_429a58e9c9a6.slice. Dec 13 01:33:56.767905 systemd[1]: Created slice kubepods-besteffort-pod6d64a3cc_1e12_4f51_9849_d93278d09aa0.slice - libcontainer container kubepods-besteffort-pod6d64a3cc_1e12_4f51_9849_d93278d09aa0.slice. Dec 13 01:33:56.781595 systemd[1]: Created slice kubepods-burstable-podaa5dce48_74be_45a6_b213_0b52ff4a1cc4.slice - libcontainer container kubepods-burstable-podaa5dce48_74be_45a6_b213_0b52ff4a1cc4.slice. 
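The systemd entries above show the cgroup slices created for the newly admitted pods; the slice name appears to be derived from the pod's QoS class and UID, with dashes mapped to underscores. A minimal sketch of that mapping, checked against the slice names in this log (the handling of guaranteed pods is an assumption, since none appear here):

# Derive the pod slice name used in the "Created slice ..." entries above,
# assuming the systemd cgroup driver layout seen in this log.
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    # qos_class is "burstable" or "besteffort" as in the entries above;
    # an empty string for guaranteed pods is an assumption, not shown here.
    prefix = "kubepods" if not qos_class else f"kubepods-{qos_class}"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

assert pod_slice_name("65795569-3321-472e-aa6d-4a50b09325de", "burstable") == \
    "kubepods-burstable-pod65795569_3321_472e_aa6d_4a50b09325de.slice"
assert pod_slice_name("1d278366-e6f8-4953-ad50-101ffdd81ba1", "besteffort") == \
    "kubepods-besteffort-pod1d278366_e6f8_4953_ad50_101ffdd81ba1.slice"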
Dec 13 01:33:56.801514 containerd[1979]: time="2024-12-13T01:33:56.800181529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:33:56.837453 kubelet[3413]: I1213 01:33:56.836211 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa5dce48-74be-45a6-b213-0b52ff4a1cc4-config-volume\") pod \"coredns-7db6d8ff4d-cnqvk\" (UID: \"aa5dce48-74be-45a6-b213-0b52ff4a1cc4\") " pod="kube-system/coredns-7db6d8ff4d-cnqvk" Dec 13 01:33:56.837453 kubelet[3413]: I1213 01:33:56.836275 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stp57\" (UniqueName: \"kubernetes.io/projected/aa5dce48-74be-45a6-b213-0b52ff4a1cc4-kube-api-access-stp57\") pod \"coredns-7db6d8ff4d-cnqvk\" (UID: \"aa5dce48-74be-45a6-b213-0b52ff4a1cc4\") " pod="kube-system/coredns-7db6d8ff4d-cnqvk" Dec 13 01:33:56.837453 kubelet[3413]: I1213 01:33:56.836344 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkj8\" (UniqueName: \"kubernetes.io/projected/6d64a3cc-1e12-4f51-9849-d93278d09aa0-kube-api-access-tfkj8\") pod \"calico-apiserver-6f66dbc9d-xwlp5\" (UID: \"6d64a3cc-1e12-4f51-9849-d93278d09aa0\") " pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" Dec 13 01:33:56.845575 kubelet[3413]: I1213 01:33:56.836386 3413 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6d64a3cc-1e12-4f51-9849-d93278d09aa0-calico-apiserver-certs\") pod \"calico-apiserver-6f66dbc9d-xwlp5\" (UID: \"6d64a3cc-1e12-4f51-9849-d93278d09aa0\") " pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" Dec 13 01:33:57.029640 containerd[1979]: time="2024-12-13T01:33:57.029602021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxh29,Uid:65795569-3321-472e-aa6d-4a50b09325de,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:57.047434 containerd[1979]: time="2024-12-13T01:33:57.047389967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78977ddc75-mllbh,Uid:1d278366-e6f8-4953-ad50-101ffdd81ba1,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:57.065219 containerd[1979]: time="2024-12-13T01:33:57.065112210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-8djcg,Uid:c486b937-2444-419d-bb0f-429a58e9c9a6,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:57.111810 containerd[1979]: time="2024-12-13T01:33:57.111711049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnqvk,Uid:aa5dce48-74be-45a6-b213-0b52ff4a1cc4,Namespace:kube-system,Attempt:0,}" Dec 13 01:33:57.113907 containerd[1979]: time="2024-12-13T01:33:57.112537258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-xwlp5,Uid:6d64a3cc-1e12-4f51-9849-d93278d09aa0,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:33:57.584651 containerd[1979]: time="2024-12-13T01:33:57.584593147Z" level=error msg="Failed to destroy network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.601676 containerd[1979]: time="2024-12-13T01:33:57.601337197Z" level=error 
msg="encountered an error cleaning up failed sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.601676 containerd[1979]: time="2024-12-13T01:33:57.601445381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxh29,Uid:65795569-3321-472e-aa6d-4a50b09325de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.614557 containerd[1979]: time="2024-12-13T01:33:57.614353913Z" level=error msg="Failed to destroy network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.615558 containerd[1979]: time="2024-12-13T01:33:57.615283200Z" level=error msg="encountered an error cleaning up failed sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.615558 containerd[1979]: time="2024-12-13T01:33:57.615360141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-xwlp5,Uid:6d64a3cc-1e12-4f51-9849-d93278d09aa0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.615558 containerd[1979]: time="2024-12-13T01:33:57.615464803Z" level=error msg="Failed to destroy network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.620570 kubelet[3413]: E1213 01:33:57.615902 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.620570 kubelet[3413]: E1213 01:33:57.616025 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" Dec 13 01:33:57.620570 kubelet[3413]: E1213 01:33:57.616059 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" Dec 13 01:33:57.621167 containerd[1979]: time="2024-12-13T01:33:57.616184990Z" level=error msg="encountered an error cleaning up failed sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.621167 containerd[1979]: time="2024-12-13T01:33:57.616236051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnqvk,Uid:aa5dce48-74be-45a6-b213-0b52ff4a1cc4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.637034 kubelet[3413]: E1213 01:33:57.616123 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f66dbc9d-xwlp5_calico-apiserver(6d64a3cc-1e12-4f51-9849-d93278d09aa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f66dbc9d-xwlp5_calico-apiserver(6d64a3cc-1e12-4f51-9849-d93278d09aa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" podUID="6d64a3cc-1e12-4f51-9849-d93278d09aa0" Dec 13 01:33:57.637034 kubelet[3413]: E1213 01:33:57.616404 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.637034 kubelet[3413]: E1213 01:33:57.616442 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cxh29" Dec 13 01:33:57.632522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226-shm.mount: Deactivated successfully. 
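Every failed RunPodSandbox above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename because calico-node has not finished initializing. A minimal diagnostic sketch one might run on the node to confirm that; only the path comes from the error text, the rest is illustrative:

#!/usr/bin/env python3
# Quick check for the root cause reported in the sandbox errors above:
# the Calico CNI plugin needs /var/lib/calico/nodename, which calico-node
# writes once it is up and running.
import os

NODENAME_FILE = "/var/lib/calico/nodename"

if os.path.isfile(NODENAME_FILE):
    with open(NODENAME_FILE) as f:
        print(f"calico nodename present: {f.read().strip()}")
else:
    print(f"{NODENAME_FILE} missing: calico/node has not initialized the CNI yet")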
Dec 13 01:33:57.638162 kubelet[3413]: E1213 01:33:57.616520 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cxh29" Dec 13 01:33:57.638162 kubelet[3413]: E1213 01:33:57.616563 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cxh29_kube-system(65795569-3321-472e-aa6d-4a50b09325de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cxh29_kube-system(65795569-3321-472e-aa6d-4a50b09325de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cxh29" podUID="65795569-3321-472e-aa6d-4a50b09325de" Dec 13 01:33:57.638162 kubelet[3413]: E1213 01:33:57.629089 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.632674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2-shm.mount: Deactivated successfully. 
Dec 13 01:33:57.638536 kubelet[3413]: E1213 01:33:57.629177 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnqvk" Dec 13 01:33:57.638536 kubelet[3413]: E1213 01:33:57.629559 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnqvk" Dec 13 01:33:57.638536 kubelet[3413]: E1213 01:33:57.630018 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnqvk_kube-system(aa5dce48-74be-45a6-b213-0b52ff4a1cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnqvk_kube-system(aa5dce48-74be-45a6-b213-0b52ff4a1cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnqvk" podUID="aa5dce48-74be-45a6-b213-0b52ff4a1cc4" Dec 13 01:33:57.632763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea-shm.mount: Deactivated successfully. 
Dec 13 01:33:57.648560 containerd[1979]: time="2024-12-13T01:33:57.648446927Z" level=error msg="Failed to destroy network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.651171 containerd[1979]: time="2024-12-13T01:33:57.651129776Z" level=error msg="Failed to destroy network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.651543 containerd[1979]: time="2024-12-13T01:33:57.651469660Z" level=error msg="encountered an error cleaning up failed sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.651653 containerd[1979]: time="2024-12-13T01:33:57.651576316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78977ddc75-mllbh,Uid:1d278366-e6f8-4953-ad50-101ffdd81ba1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.654855 kubelet[3413]: E1213 01:33:57.652150 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.654855 kubelet[3413]: E1213 01:33:57.652240 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" Dec 13 01:33:57.654855 kubelet[3413]: E1213 01:33:57.652287 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" Dec 13 01:33:57.654613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0-shm.mount: Deactivated successfully. 
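The same stat /var/lib/calico/nodename failure repeats for each of the pending pods (coredns, calico-kube-controllers, and both calico-apiserver pods), and each attempt leaves behind a sandbox ID that the kubelet then tries to stop. A sketch, with the regex written against the containerd entry format above (the script name is illustrative), that groups the failed sandbox IDs by pod:

#!/usr/bin/env python3
# Group the failed RunPodSandbox attempts above by pod, listing the orphaned
# sandbox IDs. Reads journal text on stdin, e.g.:
#   journalctl -u containerd | python3 failed_sandboxes.py
import re
import sys
from collections import defaultdict

PATTERN = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),'
    r'.*failed to setup network for sandbox \\"([0-9a-f]{64})\\"'
)

failures = defaultdict(list)
for line in sys.stdin:
    m = PATTERN.search(line)
    if m:
        name, uid, namespace, sandbox_id = m.groups()
        failures[f"{namespace}/{name}"].append(sandbox_id)

for pod, sandboxes in failures.items():
    print(pod)
    for sid in sandboxes:
        print(f"  sandbox {sid}")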
Dec 13 01:33:57.655187 containerd[1979]: time="2024-12-13T01:33:57.652230120Z" level=error msg="encountered an error cleaning up failed sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.655187 containerd[1979]: time="2024-12-13T01:33:57.652289683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-8djcg,Uid:c486b937-2444-419d-bb0f-429a58e9c9a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.655332 kubelet[3413]: E1213 01:33:57.652356 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78977ddc75-mllbh_calico-system(1d278366-e6f8-4953-ad50-101ffdd81ba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78977ddc75-mllbh_calico-system(1d278366-e6f8-4953-ad50-101ffdd81ba1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" podUID="1d278366-e6f8-4953-ad50-101ffdd81ba1" Dec 13 01:33:57.655861 kubelet[3413]: E1213 01:33:57.655654 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.655861 kubelet[3413]: E1213 01:33:57.655712 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" Dec 13 01:33:57.655861 kubelet[3413]: E1213 01:33:57.655741 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" Dec 13 01:33:57.656087 kubelet[3413]: E1213 01:33:57.655787 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6f66dbc9d-8djcg_calico-apiserver(c486b937-2444-419d-bb0f-429a58e9c9a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f66dbc9d-8djcg_calico-apiserver(c486b937-2444-419d-bb0f-429a58e9c9a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" podUID="c486b937-2444-419d-bb0f-429a58e9c9a6" Dec 13 01:33:57.661157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7-shm.mount: Deactivated successfully. Dec 13 01:33:57.814020 kubelet[3413]: I1213 01:33:57.809661 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:33:57.824820 kubelet[3413]: I1213 01:33:57.821284 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:33:57.838398 containerd[1979]: time="2024-12-13T01:33:57.838264575Z" level=info msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" Dec 13 01:33:57.838847 containerd[1979]: time="2024-12-13T01:33:57.838558972Z" level=info msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" Dec 13 01:33:57.842033 containerd[1979]: time="2024-12-13T01:33:57.841654228Z" level=info msg="Ensure that sandbox e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226 in task-service has been cleanup successfully" Dec 13 01:33:57.843048 containerd[1979]: time="2024-12-13T01:33:57.842315611Z" level=info msg="Ensure that sandbox 2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2 in task-service has been cleanup successfully" Dec 13 01:33:57.862158 kubelet[3413]: I1213 01:33:57.862125 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:33:57.866021 containerd[1979]: time="2024-12-13T01:33:57.865699885Z" level=info msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" Dec 13 01:33:57.866021 containerd[1979]: time="2024-12-13T01:33:57.865917075Z" level=info msg="Ensure that sandbox 4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7 in task-service has been cleanup successfully" Dec 13 01:33:57.870277 kubelet[3413]: I1213 01:33:57.869861 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:33:57.875554 containerd[1979]: time="2024-12-13T01:33:57.874484836Z" level=info msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" Dec 13 01:33:57.875554 containerd[1979]: time="2024-12-13T01:33:57.874656379Z" level=info msg="Ensure that sandbox fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0 in task-service has been cleanup successfully" Dec 13 01:33:57.887264 kubelet[3413]: I1213 01:33:57.886560 3413 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:33:57.891187 containerd[1979]: time="2024-12-13T01:33:57.891138692Z" level=info msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" Dec 13 01:33:57.891422 containerd[1979]: time="2024-12-13T01:33:57.891373661Z" level=info msg="Ensure that sandbox 716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea in task-service has been cleanup successfully" Dec 13 01:33:57.958103 containerd[1979]: time="2024-12-13T01:33:57.952462601Z" level=error msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" failed" error="failed to destroy network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:57.958744 kubelet[3413]: E1213 01:33:57.958681 3413 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:33:57.958846 kubelet[3413]: E1213 01:33:57.958746 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226"} Dec 13 01:33:57.958846 kubelet[3413]: E1213 01:33:57.958814 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d64a3cc-1e12-4f51-9849-d93278d09aa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:57.961497 kubelet[3413]: E1213 01:33:57.958842 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d64a3cc-1e12-4f51-9849-d93278d09aa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" podUID="6d64a3cc-1e12-4f51-9849-d93278d09aa0" Dec 13 01:33:58.023443 containerd[1979]: time="2024-12-13T01:33:58.023131910Z" level=error msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" failed" error="failed to destroy network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.023640 kubelet[3413]: E1213 01:33:58.023536 3413 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:33:58.023640 kubelet[3413]: E1213 01:33:58.023595 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea"} Dec 13 01:33:58.023802 kubelet[3413]: E1213 01:33:58.023641 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65795569-3321-472e-aa6d-4a50b09325de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:58.023802 kubelet[3413]: E1213 01:33:58.023673 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65795569-3321-472e-aa6d-4a50b09325de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cxh29" podUID="65795569-3321-472e-aa6d-4a50b09325de" Dec 13 01:33:58.037299 containerd[1979]: time="2024-12-13T01:33:58.036128951Z" level=error msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" failed" error="failed to destroy network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.037434 kubelet[3413]: E1213 01:33:58.036449 3413 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:33:58.037434 kubelet[3413]: E1213 01:33:58.036513 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0"} Dec 13 01:33:58.037434 kubelet[3413]: E1213 01:33:58.036559 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c486b937-2444-419d-bb0f-429a58e9c9a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:58.037434 kubelet[3413]: E1213 01:33:58.036590 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c486b937-2444-419d-bb0f-429a58e9c9a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" podUID="c486b937-2444-419d-bb0f-429a58e9c9a6" Dec 13 01:33:58.040028 containerd[1979]: time="2024-12-13T01:33:58.038954608Z" level=error msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" failed" error="failed to destroy network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.040850 kubelet[3413]: E1213 01:33:58.040810 3413 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:33:58.041283 kubelet[3413]: E1213 01:33:58.040987 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2"} Dec 13 01:33:58.041283 kubelet[3413]: E1213 01:33:58.041034 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa5dce48-74be-45a6-b213-0b52ff4a1cc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:58.041283 kubelet[3413]: E1213 01:33:58.041178 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa5dce48-74be-45a6-b213-0b52ff4a1cc4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnqvk" podUID="aa5dce48-74be-45a6-b213-0b52ff4a1cc4" Dec 13 01:33:58.042401 containerd[1979]: time="2024-12-13T01:33:58.042366938Z" level=error msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" failed" error="failed to destroy network for sandbox 
\"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.042597 kubelet[3413]: E1213 01:33:58.042560 3413 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:33:58.042706 kubelet[3413]: E1213 01:33:58.042612 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7"} Dec 13 01:33:58.042706 kubelet[3413]: E1213 01:33:58.042649 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d278366-e6f8-4953-ad50-101ffdd81ba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:58.042706 kubelet[3413]: E1213 01:33:58.042678 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d278366-e6f8-4953-ad50-101ffdd81ba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" podUID="1d278366-e6f8-4953-ad50-101ffdd81ba1" Dec 13 01:33:58.610328 systemd[1]: Created slice kubepods-besteffort-pod3cc2106b_553b_4660_9b27_e2c825955271.slice - libcontainer container kubepods-besteffort-pod3cc2106b_553b_4660_9b27_e2c825955271.slice. 
Dec 13 01:33:58.614714 containerd[1979]: time="2024-12-13T01:33:58.614666445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9g2n,Uid:3cc2106b-553b-4660-9b27-e2c825955271,Namespace:calico-system,Attempt:0,}" Dec 13 01:33:58.770316 containerd[1979]: time="2024-12-13T01:33:58.770261797Z" level=error msg="Failed to destroy network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.774101 containerd[1979]: time="2024-12-13T01:33:58.773276939Z" level=error msg="encountered an error cleaning up failed sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.774101 containerd[1979]: time="2024-12-13T01:33:58.773371955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9g2n,Uid:3cc2106b-553b-4660-9b27-e2c825955271,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.774283 kubelet[3413]: E1213 01:33:58.773668 3413 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.774283 kubelet[3413]: E1213 01:33:58.773744 3413 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:58.774283 kubelet[3413]: E1213 01:33:58.773772 3413 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h9g2n" Dec 13 01:33:58.774741 kubelet[3413]: E1213 01:33:58.773831 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h9g2n_calico-system(3cc2106b-553b-4660-9b27-e2c825955271)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h9g2n_calico-system(3cc2106b-553b-4660-9b27-e2c825955271)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:33:58.778741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b-shm.mount: Deactivated successfully. Dec 13 01:33:58.892574 kubelet[3413]: I1213 01:33:58.891373 3413 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:33:58.893841 containerd[1979]: time="2024-12-13T01:33:58.893803291Z" level=info msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" Dec 13 01:33:58.895120 containerd[1979]: time="2024-12-13T01:33:58.895067269Z" level=info msg="Ensure that sandbox c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b in task-service has been cleanup successfully" Dec 13 01:33:58.974027 containerd[1979]: time="2024-12-13T01:33:58.972865329Z" level=error msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" failed" error="failed to destroy network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:33:58.974208 kubelet[3413]: E1213 01:33:58.973174 3413 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:33:58.974208 kubelet[3413]: E1213 01:33:58.973223 3413 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b"} Dec 13 01:33:58.974208 kubelet[3413]: E1213 01:33:58.973271 3413 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cc2106b-553b-4660-9b27-e2c825955271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:33:58.974208 kubelet[3413]: E1213 01:33:58.973302 3413 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cc2106b-553b-4660-9b27-e2c825955271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-h9g2n" podUID="3cc2106b-553b-4660-9b27-e2c825955271" Dec 13 01:34:04.499137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788485766.mount: Deactivated successfully. Dec 13 01:34:04.723212 containerd[1979]: time="2024-12-13T01:34:04.720738818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.920449597s" Dec 13 01:34:04.723212 containerd[1979]: time="2024-12-13T01:34:04.720805859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:34:04.740485 containerd[1979]: time="2024-12-13T01:34:04.740434029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:04.745602 containerd[1979]: time="2024-12-13T01:34:04.693705710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:34:04.747024 containerd[1979]: time="2024-12-13T01:34:04.746898112Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:04.748761 containerd[1979]: time="2024-12-13T01:34:04.748715293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:04.801127 containerd[1979]: time="2024-12-13T01:34:04.800760877Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:34:04.915147 containerd[1979]: time="2024-12-13T01:34:04.915097350Z" level=info msg="CreateContainer within sandbox \"0ec638f8ecd3977f79fe994f167752ec10690060d84bcf0594d381aac1919911\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0\"" Dec 13 01:34:04.917647 containerd[1979]: time="2024-12-13T01:34:04.917593858Z" level=info msg="StartContainer for \"39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0\"" Dec 13 01:34:05.192299 systemd[1]: Started cri-containerd-39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0.scope - libcontainer container 39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0. Dec 13 01:34:05.273078 containerd[1979]: time="2024-12-13T01:34:05.272898942Z" level=info msg="StartContainer for \"39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0\" returns successfully" Dec 13 01:34:05.389801 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:34:05.391314 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 01:34:06.051725 kubelet[3413]: I1213 01:34:06.015947 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sxdtq" podStartSLOduration=2.047270185 podStartE2EDuration="20.998202415s" podCreationTimestamp="2024-12-13 01:33:45 +0000 UTC" firstStartedPulling="2024-12-13 01:33:45.800209767 +0000 UTC m=+26.409611608" lastFinishedPulling="2024-12-13 01:34:04.751141991 +0000 UTC m=+45.360543838" observedRunningTime="2024-12-13 01:34:05.997517053 +0000 UTC m=+46.606918910" watchObservedRunningTime="2024-12-13 01:34:05.998202415 +0000 UTC m=+46.607604272" Dec 13 01:34:07.753006 kernel: bpftool[4653]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:34:08.118497 (udev-worker)[4487]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:34:08.135245 systemd-networkd[1814]: vxlan.calico: Link UP Dec 13 01:34:08.135258 systemd-networkd[1814]: vxlan.calico: Gained carrier Dec 13 01:34:08.177549 (udev-worker)[4709]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:34:08.185213 (udev-worker)[4710]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:34:09.495697 systemd-networkd[1814]: vxlan.calico: Gained IPv6LL Dec 13 01:34:09.599780 containerd[1979]: time="2024-12-13T01:34:09.599375488Z" level=info msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.719 [INFO][4764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.721 [INFO][4764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" iface="eth0" netns="/var/run/netns/cni-476c1a57-2b63-c94a-d161-114978e99b77" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.721 [INFO][4764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" iface="eth0" netns="/var/run/netns/cni-476c1a57-2b63-c94a-d161-114978e99b77" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.724 [INFO][4764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" iface="eth0" netns="/var/run/netns/cni-476c1a57-2b63-c94a-d161-114978e99b77" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.724 [INFO][4764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:09.724 [INFO][4764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.122 [INFO][4770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.127 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.128 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.148 [WARNING][4770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.148 [INFO][4770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.157 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:10.170151 containerd[1979]: 2024-12-13 01:34:10.166 [INFO][4764] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:10.180099 systemd[1]: run-netns-cni\x2d476c1a57\x2d2b63\x2dc94a\x2dd161\x2d114978e99b77.mount: Deactivated successfully. Dec 13 01:34:10.196024 containerd[1979]: time="2024-12-13T01:34:10.195789118Z" level=info msg="TearDown network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" successfully" Dec 13 01:34:10.196024 containerd[1979]: time="2024-12-13T01:34:10.195967814Z" level=info msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" returns successfully" Dec 13 01:34:10.204319 containerd[1979]: time="2024-12-13T01:34:10.204270075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnqvk,Uid:aa5dce48-74be-45a6-b213-0b52ff4a1cc4,Namespace:kube-system,Attempt:1,}" Dec 13 01:34:10.481415 systemd-networkd[1814]: cali38ae70511dc: Link UP Dec 13 01:34:10.482704 systemd-networkd[1814]: cali38ae70511dc: Gained carrier Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.337 [INFO][4776] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0 coredns-7db6d8ff4d- kube-system aa5dce48-74be-45a6-b213-0b52ff4a1cc4 762 0 2024-12-13 01:33:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-168 coredns-7db6d8ff4d-cnqvk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38ae70511dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.338 [INFO][4776] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.383 [INFO][4788] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" HandleID="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.398 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" HandleID="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed0c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-168", "pod":"coredns-7db6d8ff4d-cnqvk", "timestamp":"2024-12-13 01:34:10.383360782 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.398 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.398 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.398 [INFO][4788] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.402 [INFO][4788] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.419 [INFO][4788] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.434 [INFO][4788] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.436 [INFO][4788] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.439 [INFO][4788] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.439 [INFO][4788] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.445 [INFO][4788] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53 Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.456 [INFO][4788] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.466 [INFO][4788] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.1/26] block=192.168.116.0/26 handle="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" host="ip-172-31-21-168" Dec 13 01:34:10.520596 
containerd[1979]: 2024-12-13 01:34:10.466 [INFO][4788] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.1/26] handle="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" host="ip-172-31-21-168" Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.466 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:10.520596 containerd[1979]: 2024-12-13 01:34:10.466 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.1/26] IPv6=[] ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" HandleID="k8s-pod-network.769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.474 [INFO][4776] cni-plugin/k8s.go 386: Populated endpoint ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa5dce48-74be-45a6-b213-0b52ff4a1cc4", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"coredns-7db6d8ff4d-cnqvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38ae70511dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.474 [INFO][4776] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.1/32] ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.474 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38ae70511dc ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" 
WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.482 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.484 [INFO][4776] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa5dce48-74be-45a6-b213-0b52ff4a1cc4", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53", Pod:"coredns-7db6d8ff4d-cnqvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38ae70511dc", MAC:"3a:e8:e8:dd:6f:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:10.521847 containerd[1979]: 2024-12-13 01:34:10.509 [INFO][4776] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnqvk" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:10.602835 containerd[1979]: time="2024-12-13T01:34:10.602134850Z" level=info msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" Dec 13 01:34:10.612881 containerd[1979]: time="2024-12-13T01:34:10.612053035Z" level=info msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" Dec 13 01:34:10.723402 containerd[1979]: time="2024-12-13T01:34:10.723210800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:10.723735 containerd[1979]: time="2024-12-13T01:34:10.723671163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:10.726198 containerd[1979]: time="2024-12-13T01:34:10.724011949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:10.726198 containerd[1979]: time="2024-12-13T01:34:10.725756798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:10.826209 systemd[1]: Started cri-containerd-769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53.scope - libcontainer container 769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53. Dec 13 01:34:10.834300 systemd[1]: Started sshd@7-172.31.21.168:22-139.178.68.195:60764.service - OpenSSH per-connection server daemon (139.178.68.195:60764). Dec 13 01:34:11.029963 containerd[1979]: time="2024-12-13T01:34:11.028434259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnqvk,Uid:aa5dce48-74be-45a6-b213-0b52ff4a1cc4,Namespace:kube-system,Attempt:1,} returns sandbox id \"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53\"" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.906 [INFO][4830] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.906 [INFO][4830] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" iface="eth0" netns="/var/run/netns/cni-d0bc61fb-aa07-d514-2062-04f36fc2d365" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.907 [INFO][4830] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" iface="eth0" netns="/var/run/netns/cni-d0bc61fb-aa07-d514-2062-04f36fc2d365" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.920 [INFO][4830] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" iface="eth0" netns="/var/run/netns/cni-d0bc61fb-aa07-d514-2062-04f36fc2d365" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.920 [INFO][4830] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.920 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.995 [INFO][4882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.995 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:10.995 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:11.008 [WARNING][4882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:11.008 [INFO][4882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:11.012 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:11.033309 containerd[1979]: 2024-12-13 01:34:11.025 [INFO][4830] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:11.035678 containerd[1979]: time="2024-12-13T01:34:11.034960199Z" level=info msg="TearDown network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" successfully" Dec 13 01:34:11.035678 containerd[1979]: time="2024-12-13T01:34:11.035006409Z" level=info msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" returns successfully" Dec 13 01:34:11.043049 containerd[1979]: time="2024-12-13T01:34:11.042488370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-xwlp5,Uid:6d64a3cc-1e12-4f51-9849-d93278d09aa0,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:34:11.054159 containerd[1979]: time="2024-12-13T01:34:11.053225933Z" level=info msg="CreateContainer within sandbox \"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:34:11.122088 sshd[4873]: Accepted publickey for core from 139.178.68.195 port 60764 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:11.129611 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.912 [INFO][4838] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.912 [INFO][4838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" iface="eth0" netns="/var/run/netns/cni-a405382f-9da3-6475-2c24-7682a77d4cce" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.921 [INFO][4838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" iface="eth0" netns="/var/run/netns/cni-a405382f-9da3-6475-2c24-7682a77d4cce" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.926 [INFO][4838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" iface="eth0" netns="/var/run/netns/cni-a405382f-9da3-6475-2c24-7682a77d4cce" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.926 [INFO][4838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:10.926 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.084 [INFO][4883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.084 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.086 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.109 [WARNING][4883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.109 [INFO][4883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.113 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:11.143416 containerd[1979]: 2024-12-13 01:34:11.129 [INFO][4838] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:11.147143 containerd[1979]: time="2024-12-13T01:34:11.147086613Z" level=info msg="TearDown network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" successfully" Dec 13 01:34:11.147420 containerd[1979]: time="2024-12-13T01:34:11.147377108Z" level=info msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" returns successfully" Dec 13 01:34:11.149144 systemd-logind[1953]: New session 8 of user core. Dec 13 01:34:11.154245 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:34:11.156197 containerd[1979]: time="2024-12-13T01:34:11.154958068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78977ddc75-mllbh,Uid:1d278366-e6f8-4953-ad50-101ffdd81ba1,Namespace:calico-system,Attempt:1,}" Dec 13 01:34:11.174458 containerd[1979]: time="2024-12-13T01:34:11.174406470Z" level=info msg="CreateContainer within sandbox \"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a4c020397d480ac68955554d1285f8fd047ca47131b4f4dbe587fd95219845e\"" Dec 13 01:34:11.181405 containerd[1979]: time="2024-12-13T01:34:11.181363009Z" level=info msg="StartContainer for \"0a4c020397d480ac68955554d1285f8fd047ca47131b4f4dbe587fd95219845e\"" Dec 13 01:34:11.183223 systemd[1]: run-netns-cni\x2dd0bc61fb\x2daa07\x2dd514\x2d2062\x2d04f36fc2d365.mount: Deactivated successfully. Dec 13 01:34:11.187039 systemd[1]: run-netns-cni\x2da405382f\x2d9da3\x2d6475\x2d2c24\x2d7682a77d4cce.mount: Deactivated successfully. Dec 13 01:34:11.344815 systemd[1]: Started cri-containerd-0a4c020397d480ac68955554d1285f8fd047ca47131b4f4dbe587fd95219845e.scope - libcontainer container 0a4c020397d480ac68955554d1285f8fd047ca47131b4f4dbe587fd95219845e. Dec 13 01:34:11.489524 containerd[1979]: time="2024-12-13T01:34:11.489483380Z" level=info msg="StartContainer for \"0a4c020397d480ac68955554d1285f8fd047ca47131b4f4dbe587fd95219845e\" returns successfully" Dec 13 01:34:11.609231 containerd[1979]: time="2024-12-13T01:34:11.609100118Z" level=info msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" Dec 13 01:34:11.713744 systemd-networkd[1814]: cali41f8612d6ec: Link UP Dec 13 01:34:11.728781 systemd-networkd[1814]: cali41f8612d6ec: Gained carrier Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.387 [INFO][4906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0 calico-apiserver-6f66dbc9d- calico-apiserver 6d64a3cc-1e12-4f51-9849-d93278d09aa0 802 0 2024-12-13 01:33:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f66dbc9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-168 calico-apiserver-6f66dbc9d-xwlp5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41f8612d6ec [] []}} ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.388 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.537 [INFO][4964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" HandleID="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" 
Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.570 [INFO][4964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" HandleID="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026da30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-168", "pod":"calico-apiserver-6f66dbc9d-xwlp5", "timestamp":"2024-12-13 01:34:11.537716659 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.570 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.570 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.570 [INFO][4964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.575 [INFO][4964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.584 [INFO][4964] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.593 [INFO][4964] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.597 [INFO][4964] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.621 [INFO][4964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.621 [INFO][4964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.639 [INFO][4964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0 Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.658 [INFO][4964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.679 [INFO][4964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.2/26] block=192.168.116.0/26 handle="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.679 [INFO][4964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.2/26] handle="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" 
host="ip-172-31-21-168" Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.680 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:11.802243 containerd[1979]: 2024-12-13 01:34:11.680 [INFO][4964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.2/26] IPv6=[] ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" HandleID="k8s-pod-network.2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.693 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d64a3cc-1e12-4f51-9849-d93278d09aa0", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"calico-apiserver-6f66dbc9d-xwlp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f8612d6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.695 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.2/32] ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.696 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41f8612d6ec ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.736 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 
01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.740 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d64a3cc-1e12-4f51-9849-d93278d09aa0", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0", Pod:"calico-apiserver-6f66dbc9d-xwlp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f8612d6ec", MAC:"c2:e3:05:96:6e:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:11.808033 containerd[1979]: 2024-12-13 01:34:11.791 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-xwlp5" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:11.941567 containerd[1979]: time="2024-12-13T01:34:11.941284754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:11.942150 containerd[1979]: time="2024-12-13T01:34:11.941818046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:11.943203 containerd[1979]: time="2024-12-13T01:34:11.941857102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:11.943765 containerd[1979]: time="2024-12-13T01:34:11.943625475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:11.952986 systemd-networkd[1814]: cali34d1a994072: Link UP Dec 13 01:34:11.959483 systemd-networkd[1814]: cali34d1a994072: Gained carrier Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.396 [INFO][4917] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0 calico-kube-controllers-78977ddc75- calico-system 1d278366-e6f8-4953-ad50-101ffdd81ba1 803 0 2024-12-13 01:33:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78977ddc75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-168 calico-kube-controllers-78977ddc75-mllbh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali34d1a994072 [] []}} ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.397 [INFO][4917] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.554 [INFO][4969] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" HandleID="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.588 [INFO][4969] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" HandleID="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036e870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-168", "pod":"calico-kube-controllers-78977ddc75-mllbh", "timestamp":"2024-12-13 01:34:11.553827222 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.588 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.681 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.684 [INFO][4969] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.691 [INFO][4969] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.712 [INFO][4969] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.754 [INFO][4969] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.768 [INFO][4969] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.783 [INFO][4969] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.783 [INFO][4969] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.796 [INFO][4969] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0 Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.860 [INFO][4969] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.933 [INFO][4969] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.3/26] block=192.168.116.0/26 handle="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.933 [INFO][4969] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.3/26] handle="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" host="ip-172-31-21-168" Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.933 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:12.024824 containerd[1979]: 2024-12-13 01:34:11.933 [INFO][4969] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.3/26] IPv6=[] ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" HandleID="k8s-pod-network.ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.023412 systemd[1]: Started cri-containerd-2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0.scope - libcontainer container 2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0. 
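The ipam lines above trace a fixed ordering for each assignment: acquire the host-wide IPAM lock, look up the host's block affinities, confirm the affine block (192.168.116.0/26 on this node), assign the next free address from it, write the block back to claim the IP, then release the lock. Below is a toy, in-memory Go sketch of that ordering; the types, the pre-seeded addresses, and the handle names are assumptions made for illustration and are not Calico's actual ipam package:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// toy stand-ins for the datastore objects the log mentions
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> handle that claimed it
}

type allocator struct {
	mu    sync.Mutex // plays the role of the "host-wide IPAM lock"
	block *block     // the block with affinity to this host
}

// assign mirrors the ordering in the log: lock, walk the affine block,
// pick the first free address, record the handle ("write block in order
// to claim IPs"), then unlock via the deferred call.
func (a *allocator) assign(handle string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for addr := a.block.cidr.Addr(); a.block.cidr.Contains(addr); addr = addr.Next() {
		if _, taken := a.block.used[addr]; !taken {
			a.block.used[addr] = handle
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", a.block.cidr)
}

func main() {
	a := &allocator{block: &block{
		cidr: netip.MustParsePrefix("192.168.116.0/26"),
		used: map[netip.Addr]string{
			netip.MustParseAddr("192.168.116.0"): "network-address", // assumed unavailable, for illustration
			netip.MustParseAddr("192.168.116.1"): "earlier-pod",     // assumed assigned before this excerpt
			netip.MustParseAddr("192.168.116.2"): "apiserver-pod",   // claimed by calico-apiserver-6f66dbc9d-xwlp5 above
		},
	}}
	addr, _ := a.assign("kube-controllers-pod")
	fmt.Println(addr) // 192.168.116.3, matching the assignment in the lines above
}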
Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:11.944 [INFO][4917] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0", GenerateName:"calico-kube-controllers-78977ddc75-", Namespace:"calico-system", SelfLink:"", UID:"1d278366-e6f8-4953-ad50-101ffdd81ba1", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78977ddc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"calico-kube-controllers-78977ddc75-mllbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34d1a994072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:11.946 [INFO][4917] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.3/32] ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:11.946 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34d1a994072 ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:11.949 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:11.953 [INFO][4917] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0", GenerateName:"calico-kube-controllers-78977ddc75-", Namespace:"calico-system", SelfLink:"", UID:"1d278366-e6f8-4953-ad50-101ffdd81ba1", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78977ddc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0", Pod:"calico-kube-controllers-78977ddc75-mllbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34d1a994072", MAC:"66:d7:fc:a8:24:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:12.025949 containerd[1979]: 2024-12-13 01:34:12.006 [INFO][4917] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0" Namespace="calico-system" Pod="calico-kube-controllers-78977ddc75-mllbh" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:12.121007 systemd-networkd[1814]: cali38ae70511dc: Gained IPv6LL Dec 13 01:34:12.126542 sshd[4873]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:12.138371 systemd[1]: sshd@7-172.31.21.168:22-139.178.68.195:60764.service: Deactivated successfully. Dec 13 01:34:12.150388 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:34:12.155943 systemd-logind[1953]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:34:12.166871 systemd-logind[1953]: Removed session 8. Dec 13 01:34:12.191620 kubelet[3413]: I1213 01:34:12.188037 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cnqvk" podStartSLOduration=37.188007889 podStartE2EDuration="37.188007889s" podCreationTimestamp="2024-12-13 01:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:12.185409186 +0000 UTC m=+52.794811042" watchObservedRunningTime="2024-12-13 01:34:12.188007889 +0000 UTC m=+52.797409746" Dec 13 01:34:12.242713 containerd[1979]: time="2024-12-13T01:34:12.237464473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:12.242713 containerd[1979]: time="2024-12-13T01:34:12.237558220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:12.242713 containerd[1979]: time="2024-12-13T01:34:12.237581166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:12.242713 containerd[1979]: time="2024-12-13T01:34:12.237706666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.039 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.040 [INFO][5000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" iface="eth0" netns="/var/run/netns/cni-2d18df35-c100-6702-39ee-b64c47a3705f" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.047 [INFO][5000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" iface="eth0" netns="/var/run/netns/cni-2d18df35-c100-6702-39ee-b64c47a3705f" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.051 [INFO][5000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" iface="eth0" netns="/var/run/netns/cni-2d18df35-c100-6702-39ee-b64c47a3705f" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.051 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.051 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.109 [INFO][5057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.109 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.109 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.151 [WARNING][5057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.151 [INFO][5057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.158 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:12.255482 containerd[1979]: 2024-12-13 01:34:12.168 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:12.261277 containerd[1979]: time="2024-12-13T01:34:12.255634966Z" level=info msg="TearDown network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" successfully" Dec 13 01:34:12.261277 containerd[1979]: time="2024-12-13T01:34:12.255672322Z" level=info msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" returns successfully" Dec 13 01:34:12.261277 containerd[1979]: time="2024-12-13T01:34:12.257396553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9g2n,Uid:3cc2106b-553b-4660-9b27-e2c825955271,Namespace:calico-system,Attempt:1,}" Dec 13 01:34:12.262590 systemd[1]: run-netns-cni\x2d2d18df35\x2dc100\x2d6702\x2d39ee\x2db64c47a3705f.mount: Deactivated successfully. Dec 13 01:34:12.298272 systemd[1]: Started cri-containerd-ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0.scope - libcontainer container ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0. Dec 13 01:34:12.396868 containerd[1979]: time="2024-12-13T01:34:12.396503303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-xwlp5,Uid:6d64a3cc-1e12-4f51-9849-d93278d09aa0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0\"" Dec 13 01:34:12.443035 containerd[1979]: time="2024-12-13T01:34:12.441778982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:34:12.534962 containerd[1979]: time="2024-12-13T01:34:12.534818882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78977ddc75-mllbh,Uid:1d278366-e6f8-4953-ad50-101ffdd81ba1,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0\"" Dec 13 01:34:12.599558 containerd[1979]: time="2024-12-13T01:34:12.599435410Z" level=info msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" Dec 13 01:34:12.735188 systemd-networkd[1814]: cali890e5acc998: Link UP Dec 13 01:34:12.736192 systemd-networkd[1814]: cali890e5acc998: Gained carrier Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.683 [INFO][5161] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.683 [INFO][5161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" iface="eth0" netns="/var/run/netns/cni-45cd19cf-493e-8f08-68b2-1be282adc734" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.684 [INFO][5161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" iface="eth0" netns="/var/run/netns/cni-45cd19cf-493e-8f08-68b2-1be282adc734" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.684 [INFO][5161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" iface="eth0" netns="/var/run/netns/cni-45cd19cf-493e-8f08-68b2-1be282adc734" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.685 [INFO][5161] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.685 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.725 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.725 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.725 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.746 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.746 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.754 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:12.757507 containerd[1979]: 2024-12-13 01:34:12.755 [INFO][5161] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:12.760565 containerd[1979]: time="2024-12-13T01:34:12.757676478Z" level=info msg="TearDown network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" successfully" Dec 13 01:34:12.760565 containerd[1979]: time="2024-12-13T01:34:12.757777163Z" level=info msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" returns successfully" Dec 13 01:34:12.760565 containerd[1979]: time="2024-12-13T01:34:12.758677018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxh29,Uid:65795569-3321-472e-aa6d-4a50b09325de,Namespace:kube-system,Attempt:1,}" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.586 [INFO][5108] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0 csi-node-driver- calico-system 3cc2106b-553b-4660-9b27-e2c825955271 816 0 2024-12-13 01:33:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-168 csi-node-driver-h9g2n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali890e5acc998 [] []}} ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.586 [INFO][5108] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.635 [INFO][5143] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" HandleID="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.649 [INFO][5143] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" HandleID="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038f980), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-168", "pod":"csi-node-driver-h9g2n", "timestamp":"2024-12-13 01:34:12.63521017 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.649 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.649 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.649 [INFO][5143] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.655 [INFO][5143] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.668 [INFO][5143] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.682 [INFO][5143] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.689 [INFO][5143] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.696 [INFO][5143] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.696 [INFO][5143] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.702 [INFO][5143] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545 Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.710 [INFO][5143] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.723 [INFO][5143] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.4/26] block=192.168.116.0/26 handle="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.723 [INFO][5143] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.4/26] handle="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" host="ip-172-31-21-168" Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.723 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:34:12.775774 containerd[1979]: 2024-12-13 01:34:12.723 [INFO][5143] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.4/26] IPv6=[] ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" HandleID="k8s-pod-network.1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.726 [INFO][5108] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cc2106b-553b-4660-9b27-e2c825955271", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"csi-node-driver-h9g2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890e5acc998", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.726 [INFO][5108] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.4/32] ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.726 [INFO][5108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali890e5acc998 ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.736 [INFO][5108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.738 [INFO][5108] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" 
Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cc2106b-553b-4660-9b27-e2c825955271", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545", Pod:"csi-node-driver-h9g2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890e5acc998", MAC:"9a:9f:a2:72:5a:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:12.778770 containerd[1979]: 2024-12-13 01:34:12.772 [INFO][5108] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545" Namespace="calico-system" Pod="csi-node-driver-h9g2n" WorkloadEndpoint="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:12.835449 containerd[1979]: time="2024-12-13T01:34:12.834160730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:12.835449 containerd[1979]: time="2024-12-13T01:34:12.834252644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:12.835449 containerd[1979]: time="2024-12-13T01:34:12.834277482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:12.835449 containerd[1979]: time="2024-12-13T01:34:12.834393695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:12.872258 systemd[1]: Started cri-containerd-1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545.scope - libcontainer container 1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545. 
Dec 13 01:34:12.954177 containerd[1979]: time="2024-12-13T01:34:12.954122887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h9g2n,Uid:3cc2106b-553b-4660-9b27-e2c825955271,Namespace:calico-system,Attempt:1,} returns sandbox id \"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545\"" Dec 13 01:34:13.102707 systemd-networkd[1814]: cali0ab2d70712d: Link UP Dec 13 01:34:13.106405 systemd-networkd[1814]: cali0ab2d70712d: Gained carrier Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.858 [INFO][5185] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0 coredns-7db6d8ff4d- kube-system 65795569-3321-472e-aa6d-4a50b09325de 830 0 2024-12-13 01:33:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-168 coredns-7db6d8ff4d-cxh29 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0ab2d70712d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.859 [INFO][5185] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.969 [INFO][5234] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" HandleID="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.981 [INFO][5234] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" HandleID="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000166ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-168", "pod":"coredns-7db6d8ff4d-cxh29", "timestamp":"2024-12-13 01:34:12.969930102 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.981 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.981 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.981 [INFO][5234] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.984 [INFO][5234] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:12.992 [INFO][5234] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.000 [INFO][5234] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.023 [INFO][5234] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.050 [INFO][5234] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.050 [INFO][5234] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.054 [INFO][5234] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.076 [INFO][5234] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.090 [INFO][5234] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.5/26] block=192.168.116.0/26 handle="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.090 [INFO][5234] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.5/26] handle="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" host="ip-172-31-21-168" Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.090 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
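For reference, the 192.168.116.0/26 block these ipam lines keep confirming covers 192.168.116.0 through 192.168.116.63, so the .2, .3, .4, and .5 addresses handed out on this node all come from that single 64-address block. A quick net/netip check (a throwaway sketch, not part of the node's tooling):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.116.0/26") // 64 addresses: .0 through .63
	for _, s := range []string{
		"192.168.116.2", // calico-apiserver-6f66dbc9d-xwlp5
		"192.168.116.3", // calico-kube-controllers-78977ddc75-mllbh
		"192.168.116.4", // csi-node-driver-h9g2n
		"192.168.116.5", // coredns-7db6d8ff4d-cxh29
	} {
		fmt.Println(s, "in", block, "=", block.Contains(netip.MustParseAddr(s)))
	}
}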
Dec 13 01:34:13.148627 containerd[1979]: 2024-12-13 01:34:13.090 [INFO][5234] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.5/26] IPv6=[] ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" HandleID="k8s-pod-network.7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.094 [INFO][5185] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65795569-3321-472e-aa6d-4a50b09325de", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"coredns-7db6d8ff4d-cxh29", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ab2d70712d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.095 [INFO][5185] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.5/32] ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.095 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ab2d70712d ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.103 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" 
WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.104 [INFO][5185] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65795569-3321-472e-aa6d-4a50b09325de", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f", Pod:"coredns-7db6d8ff4d-cxh29", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ab2d70712d", MAC:"46:2c:06:b1:52:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:13.149661 containerd[1979]: 2024-12-13 01:34:13.144 [INFO][5185] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cxh29" WorkloadEndpoint="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:13.193180 systemd[1]: run-netns-cni\x2d45cd19cf\x2d493e\x2d8f08\x2d68b2\x2d1be282adc734.mount: Deactivated successfully. Dec 13 01:34:13.211649 containerd[1979]: time="2024-12-13T01:34:13.211523721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:13.213049 containerd[1979]: time="2024-12-13T01:34:13.211751219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:13.213049 containerd[1979]: time="2024-12-13T01:34:13.211779577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:13.213049 containerd[1979]: time="2024-12-13T01:34:13.212118689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:13.262263 systemd[1]: Started cri-containerd-7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f.scope - libcontainer container 7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f. Dec 13 01:34:13.357251 containerd[1979]: time="2024-12-13T01:34:13.356747986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxh29,Uid:65795569-3321-472e-aa6d-4a50b09325de,Namespace:kube-system,Attempt:1,} returns sandbox id \"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f\"" Dec 13 01:34:13.364004 containerd[1979]: time="2024-12-13T01:34:13.363843105Z" level=info msg="CreateContainer within sandbox \"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:34:13.396574 containerd[1979]: time="2024-12-13T01:34:13.396524866Z" level=info msg="CreateContainer within sandbox \"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f79d7f2515123510ef52ed76d5408b164a0b4cd482eea83214ad38e76e6ef64a\"" Dec 13 01:34:13.398121 containerd[1979]: time="2024-12-13T01:34:13.397618924Z" level=info msg="StartContainer for \"f79d7f2515123510ef52ed76d5408b164a0b4cd482eea83214ad38e76e6ef64a\"" Dec 13 01:34:13.443419 systemd[1]: Started cri-containerd-f79d7f2515123510ef52ed76d5408b164a0b4cd482eea83214ad38e76e6ef64a.scope - libcontainer container f79d7f2515123510ef52ed76d5408b164a0b4cd482eea83214ad38e76e6ef64a. Dec 13 01:34:13.489460 containerd[1979]: time="2024-12-13T01:34:13.489387896Z" level=info msg="StartContainer for \"f79d7f2515123510ef52ed76d5408b164a0b4cd482eea83214ad38e76e6ef64a\" returns successfully" Dec 13 01:34:13.527654 systemd-networkd[1814]: cali34d1a994072: Gained IPv6LL Dec 13 01:34:13.609708 containerd[1979]: time="2024-12-13T01:34:13.609571277Z" level=info msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.723 [INFO][5361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.724 [INFO][5361] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" iface="eth0" netns="/var/run/netns/cni-b66b6a3d-f610-3aa5-b24e-375ca7beec46" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.724 [INFO][5361] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" iface="eth0" netns="/var/run/netns/cni-b66b6a3d-f610-3aa5-b24e-375ca7beec46" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.724 [INFO][5361] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" iface="eth0" netns="/var/run/netns/cni-b66b6a3d-f610-3aa5-b24e-375ca7beec46" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.724 [INFO][5361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.724 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.753 [INFO][5367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.753 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.753 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.759 [WARNING][5367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.759 [INFO][5367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.761 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:13.764860 containerd[1979]: 2024-12-13 01:34:13.762 [INFO][5361] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:13.765892 containerd[1979]: time="2024-12-13T01:34:13.765060964Z" level=info msg="TearDown network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" successfully" Dec 13 01:34:13.765892 containerd[1979]: time="2024-12-13T01:34:13.765094672Z" level=info msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" returns successfully" Dec 13 01:34:13.767047 containerd[1979]: time="2024-12-13T01:34:13.766793487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-8djcg,Uid:c486b937-2444-419d-bb0f-429a58e9c9a6,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:34:13.783431 systemd-networkd[1814]: cali41f8612d6ec: Gained IPv6LL Dec 13 01:34:14.096329 systemd-networkd[1814]: cali03b027879ba: Link UP Dec 13 01:34:14.097997 systemd-networkd[1814]: cali03b027879ba: Gained carrier Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:13.898 [INFO][5374] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0 calico-apiserver-6f66dbc9d- calico-apiserver c486b937-2444-419d-bb0f-429a58e9c9a6 849 0 2024-12-13 01:33:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f66dbc9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-168 calico-apiserver-6f66dbc9d-8djcg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali03b027879ba [] []}} ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:13.900 [INFO][5374] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:13.973 [INFO][5385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" HandleID="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.005 [INFO][5385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" HandleID="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-168", "pod":"calico-apiserver-6f66dbc9d-8djcg", "timestamp":"2024-12-13 01:34:13.973392386 +0000 UTC"}, Hostname:"ip-172-31-21-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.005 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.005 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.005 [INFO][5385] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-168' Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.009 [INFO][5385] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.016 [INFO][5385] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.027 [INFO][5385] ipam/ipam.go 489: Trying affinity for 192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.031 [INFO][5385] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.034 [INFO][5385] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.034 [INFO][5385] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.038 [INFO][5385] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317 Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.048 [INFO][5385] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.085 [INFO][5385] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.6/26] block=192.168.116.0/26 handle="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.085 [INFO][5385] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.6/26] handle="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" host="ip-172-31-21-168" Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.085 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:34:14.138168 containerd[1979]: 2024-12-13 01:34:14.085 [INFO][5385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.6/26] IPv6=[] ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" HandleID="k8s-pod-network.c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.090 [INFO][5374] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c486b937-2444-419d-bb0f-429a58e9c9a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"", Pod:"calico-apiserver-6f66dbc9d-8djcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03b027879ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.091 [INFO][5374] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.6/32] ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.091 [INFO][5374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03b027879ba ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.095 [INFO][5374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.095 [INFO][5374] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c486b937-2444-419d-bb0f-429a58e9c9a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317", Pod:"calico-apiserver-6f66dbc9d-8djcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03b027879ba", MAC:"6a:4a:13:c8:fd:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:14.139505 containerd[1979]: 2024-12-13 01:34:14.128 [INFO][5374] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317" Namespace="calico-apiserver" Pod="calico-apiserver-6f66dbc9d-8djcg" WorkloadEndpoint="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:14.194963 systemd[1]: run-netns-cni\x2db66b6a3d\x2df610\x2d3aa5\x2db24e\x2d375ca7beec46.mount: Deactivated successfully. Dec 13 01:34:14.245939 containerd[1979]: time="2024-12-13T01:34:14.245825041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:34:14.246152 containerd[1979]: time="2024-12-13T01:34:14.245926354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:34:14.246411 containerd[1979]: time="2024-12-13T01:34:14.246304580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:14.246745 containerd[1979]: time="2024-12-13T01:34:14.246592590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:34:14.304467 systemd[1]: Started cri-containerd-c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317.scope - libcontainer container c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317. 
Dec 13 01:34:14.388492 containerd[1979]: time="2024-12-13T01:34:14.388306200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f66dbc9d-8djcg,Uid:c486b937-2444-419d-bb0f-429a58e9c9a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317\"" Dec 13 01:34:14.425715 systemd-networkd[1814]: cali0ab2d70712d: Gained IPv6LL Dec 13 01:34:14.487251 systemd-networkd[1814]: cali890e5acc998: Gained IPv6LL Dec 13 01:34:15.229239 kubelet[3413]: I1213 01:34:15.227822 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cxh29" podStartSLOduration=40.227794677 podStartE2EDuration="40.227794677s" podCreationTimestamp="2024-12-13 01:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:14.215805832 +0000 UTC m=+54.825207687" watchObservedRunningTime="2024-12-13 01:34:15.227794677 +0000 UTC m=+55.837196549" Dec 13 01:34:15.769347 systemd-networkd[1814]: cali03b027879ba: Gained IPv6LL Dec 13 01:34:15.963039 containerd[1979]: time="2024-12-13T01:34:15.961905507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:15.964800 containerd[1979]: time="2024-12-13T01:34:15.964665068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:34:15.967243 containerd[1979]: time="2024-12-13T01:34:15.967160026Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:15.972394 containerd[1979]: time="2024-12-13T01:34:15.971807329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:15.975153 containerd[1979]: time="2024-12-13T01:34:15.975095532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.533267266s" Dec 13 01:34:15.975538 containerd[1979]: time="2024-12-13T01:34:15.975147649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:34:15.983706 containerd[1979]: time="2024-12-13T01:34:15.983662660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:34:15.987236 containerd[1979]: time="2024-12-13T01:34:15.987184384Z" level=info msg="CreateContainer within sandbox \"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:34:16.015417 containerd[1979]: time="2024-12-13T01:34:16.015247121Z" level=info msg="CreateContainer within sandbox \"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"bd5fa9139310d32a4421f4eacddae09d12d352fbf72780559bb3e64130964f6d\"" Dec 13 01:34:16.017987 containerd[1979]: time="2024-12-13T01:34:16.017369386Z" level=info msg="StartContainer for \"bd5fa9139310d32a4421f4eacddae09d12d352fbf72780559bb3e64130964f6d\"" Dec 13 01:34:16.121964 systemd[1]: Started cri-containerd-bd5fa9139310d32a4421f4eacddae09d12d352fbf72780559bb3e64130964f6d.scope - libcontainer container bd5fa9139310d32a4421f4eacddae09d12d352fbf72780559bb3e64130964f6d. Dec 13 01:34:16.210343 containerd[1979]: time="2024-12-13T01:34:16.210120555Z" level=info msg="StartContainer for \"bd5fa9139310d32a4421f4eacddae09d12d352fbf72780559bb3e64130964f6d\" returns successfully" Dec 13 01:34:17.173820 systemd[1]: Started sshd@8-172.31.21.168:22-139.178.68.195:33112.service - OpenSSH per-connection server daemon (139.178.68.195:33112). Dec 13 01:34:17.262588 kubelet[3413]: I1213 01:34:17.261612 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f66dbc9d-xwlp5" podStartSLOduration=29.719474573 podStartE2EDuration="33.2615928s" podCreationTimestamp="2024-12-13 01:33:44 +0000 UTC" firstStartedPulling="2024-12-13 01:34:12.440894249 +0000 UTC m=+53.050296103" lastFinishedPulling="2024-12-13 01:34:15.983012474 +0000 UTC m=+56.592414330" observedRunningTime="2024-12-13 01:34:17.261312575 +0000 UTC m=+57.870714431" watchObservedRunningTime="2024-12-13 01:34:17.2615928 +0000 UTC m=+57.870994649" Dec 13 01:34:17.481650 sshd[5497]: Accepted publickey for core from 139.178.68.195 port 33112 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:17.485745 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:17.492961 systemd-logind[1953]: New session 9 of user core. Dec 13 01:34:17.503719 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 01:34:18.243736 kubelet[3413]: I1213 01:34:18.243691 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:34:18.284279 ntpd[1943]: Listen normally on 7 vxlan.calico 192.168.116.0:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 7 vxlan.calico 192.168.116.0:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 8 vxlan.calico [fe80::6469:e4ff:fe6f:884f%4]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 9 cali38ae70511dc [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 10 cali41f8612d6ec [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 11 cali34d1a994072 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 12 cali890e5acc998 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 13 cali0ab2d70712d [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:34:18.295118 ntpd[1943]: 13 Dec 01:34:18 ntpd[1943]: Listen normally on 14 cali03b027879ba [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:34:18.286246 ntpd[1943]: Listen normally on 8 vxlan.calico [fe80::6469:e4ff:fe6f:884f%4]:123 Dec 13 01:34:18.289025 ntpd[1943]: Listen normally on 9 cali38ae70511dc [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:34:18.289120 ntpd[1943]: Listen normally on 10 cali41f8612d6ec [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:34:18.289509 ntpd[1943]: Listen normally on 11 cali34d1a994072 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:34:18.289572 ntpd[1943]: Listen normally on 12 cali890e5acc998 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:34:18.289609 ntpd[1943]: Listen normally on 13 cali0ab2d70712d [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:34:18.289646 ntpd[1943]: Listen normally on 14 cali03b027879ba [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:34:18.639721 sshd[5497]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:18.656536 systemd-logind[1953]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:34:18.659535 systemd[1]: sshd@8-172.31.21.168:22-139.178.68.195:33112.service: Deactivated successfully. Dec 13 01:34:18.667525 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:34:18.675144 systemd-logind[1953]: Removed session 9. Dec 13 01:34:19.208806 systemd[1]: run-containerd-runc-k8s.io-39b9c77ff21cbc370d7fbba93d882d513926126f2d021a24d229dee56331ada0-runc.VmPlIR.mount: Deactivated successfully. 
Dec 13 01:34:19.389940 containerd[1979]: time="2024-12-13T01:34:19.387459426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:19.395509 containerd[1979]: time="2024-12-13T01:34:19.395440548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:34:19.398190 containerd[1979]: time="2024-12-13T01:34:19.398151370Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:19.404038 containerd[1979]: time="2024-12-13T01:34:19.403962972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:19.406332 containerd[1979]: time="2024-12-13T01:34:19.406285529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.422542405s" Dec 13 01:34:19.406502 containerd[1979]: time="2024-12-13T01:34:19.406340782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:34:19.410529 containerd[1979]: time="2024-12-13T01:34:19.410490608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:34:19.456095 containerd[1979]: time="2024-12-13T01:34:19.455944159Z" level=info msg="CreateContainer within sandbox \"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:34:19.538273 containerd[1979]: time="2024-12-13T01:34:19.538060500Z" level=info msg="CreateContainer within sandbox \"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5\"" Dec 13 01:34:19.540377 containerd[1979]: time="2024-12-13T01:34:19.540193550Z" level=info msg="StartContainer for \"7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5\"" Dec 13 01:34:19.671367 systemd[1]: Started cri-containerd-7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5.scope - libcontainer container 7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5. Dec 13 01:34:19.753353 containerd[1979]: time="2024-12-13T01:34:19.751786107Z" level=info msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" Dec 13 01:34:19.782231 containerd[1979]: time="2024-12-13T01:34:19.782183201Z" level=info msg="StartContainer for \"7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5\" returns successfully" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:19.993 [WARNING][5597] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0", GenerateName:"calico-kube-controllers-78977ddc75-", Namespace:"calico-system", SelfLink:"", UID:"1d278366-e6f8-4953-ad50-101ffdd81ba1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78977ddc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0", Pod:"calico-kube-controllers-78977ddc75-mllbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34d1a994072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:19.995 [INFO][5597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:19.995 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" iface="eth0" netns="" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:19.995 [INFO][5597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:19.995 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.029 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.029 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.030 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.037 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.037 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.039 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.043216 containerd[1979]: 2024-12-13 01:34:20.041 [INFO][5597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.043746 containerd[1979]: time="2024-12-13T01:34:20.043255399Z" level=info msg="TearDown network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" successfully" Dec 13 01:34:20.043746 containerd[1979]: time="2024-12-13T01:34:20.043284391Z" level=info msg="StopPodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" returns successfully" Dec 13 01:34:20.084765 containerd[1979]: time="2024-12-13T01:34:20.084711185Z" level=info msg="RemovePodSandbox for \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" Dec 13 01:34:20.084894 containerd[1979]: time="2024-12-13T01:34:20.084773834Z" level=info msg="Forcibly stopping sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\"" Dec 13 01:34:20.182060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743571586.mount: Deactivated successfully. Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.147 [WARNING][5627] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0", GenerateName:"calico-kube-controllers-78977ddc75-", Namespace:"calico-system", SelfLink:"", UID:"1d278366-e6f8-4953-ad50-101ffdd81ba1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78977ddc75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"ae2b81cb626894472c928d821563ac840f5503dd3c0bcd7334acfc165c03a0e0", Pod:"calico-kube-controllers-78977ddc75-mllbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34d1a994072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.147 [INFO][5627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.147 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" iface="eth0" netns="" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.147 [INFO][5627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.148 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.184 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.187 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.188 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.197 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.197 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" HandleID="k8s-pod-network.4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Workload="ip--172--31--21--168-k8s-calico--kube--controllers--78977ddc75--mllbh-eth0" Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.199 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.203255 containerd[1979]: 2024-12-13 01:34:20.201 [INFO][5627] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7" Dec 13 01:34:20.204090 containerd[1979]: time="2024-12-13T01:34:20.203297764Z" level=info msg="TearDown network for sandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" successfully" Dec 13 01:34:20.238320 containerd[1979]: time="2024-12-13T01:34:20.238274137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:20.277639 containerd[1979]: time="2024-12-13T01:34:20.275583891Z" level=info msg="RemovePodSandbox \"4dd6aa44c91115181a135cc1e15d99ee327c8432859e99438498718f53a5c7e7\" returns successfully" Dec 13 01:34:20.309919 containerd[1979]: time="2024-12-13T01:34:20.309801049Z" level=info msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" Dec 13 01:34:20.404551 kubelet[3413]: I1213 01:34:20.404357 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78977ddc75-mllbh" podStartSLOduration=28.549723001 podStartE2EDuration="35.404315004s" podCreationTimestamp="2024-12-13 01:33:45 +0000 UTC" firstStartedPulling="2024-12-13 01:34:12.554277112 +0000 UTC m=+53.163678954" lastFinishedPulling="2024-12-13 01:34:19.408869109 +0000 UTC m=+60.018270957" observedRunningTime="2024-12-13 01:34:20.277596973 +0000 UTC m=+60.886998832" watchObservedRunningTime="2024-12-13 01:34:20.404315004 +0000 UTC m=+61.013716860" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.397 [WARNING][5663] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d64a3cc-1e12-4f51-9849-d93278d09aa0", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0", Pod:"calico-apiserver-6f66dbc9d-xwlp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f8612d6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.397 [INFO][5663] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.397 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" iface="eth0" netns="" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.397 [INFO][5663] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.397 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.441 [INFO][5677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.441 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.441 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.447 [WARNING][5677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.447 [INFO][5677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.448 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.452663 containerd[1979]: 2024-12-13 01:34:20.450 [INFO][5663] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.454633 containerd[1979]: time="2024-12-13T01:34:20.452715452Z" level=info msg="TearDown network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" successfully" Dec 13 01:34:20.454633 containerd[1979]: time="2024-12-13T01:34:20.452743777Z" level=info msg="StopPodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" returns successfully" Dec 13 01:34:20.454633 containerd[1979]: time="2024-12-13T01:34:20.453513280Z" level=info msg="RemovePodSandbox for \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" Dec 13 01:34:20.454633 containerd[1979]: time="2024-12-13T01:34:20.453545049Z" level=info msg="Forcibly stopping sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\"" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.505 [WARNING][5696] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d64a3cc-1e12-4f51-9849-d93278d09aa0", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"2caa5a2c92b1210ebb37a903629dcd6974fe98353ecbdce3fbd93174f7eb9ca0", Pod:"calico-apiserver-6f66dbc9d-xwlp5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f8612d6ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.505 [INFO][5696] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.505 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" iface="eth0" netns="" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.505 [INFO][5696] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.505 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.533 [INFO][5703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.533 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.533 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.540 [WARNING][5703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.540 [INFO][5703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" HandleID="k8s-pod-network.e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--xwlp5-eth0" Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.542 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.546166 containerd[1979]: 2024-12-13 01:34:20.544 [INFO][5696] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226" Dec 13 01:34:20.546895 containerd[1979]: time="2024-12-13T01:34:20.546212348Z" level=info msg="TearDown network for sandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" successfully" Dec 13 01:34:20.551111 containerd[1979]: time="2024-12-13T01:34:20.551066233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:20.551909 containerd[1979]: time="2024-12-13T01:34:20.551178093Z" level=info msg="RemovePodSandbox \"e95d667c7a378fd23dc3d90537774f033dd062db75a6518b25ae0dd49f029226\" returns successfully" Dec 13 01:34:20.551909 containerd[1979]: time="2024-12-13T01:34:20.551864725Z" level=info msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.613 [WARNING][5722] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa5dce48-74be-45a6-b213-0b52ff4a1cc4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53", Pod:"coredns-7db6d8ff4d-cnqvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38ae70511dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.613 [INFO][5722] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.613 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" iface="eth0" netns="" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.613 [INFO][5722] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.613 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.651 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.652 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.652 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.658 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.658 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.660 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.664367 containerd[1979]: 2024-12-13 01:34:20.662 [INFO][5722] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.665209 containerd[1979]: time="2024-12-13T01:34:20.664353112Z" level=info msg="TearDown network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" successfully" Dec 13 01:34:20.665209 containerd[1979]: time="2024-12-13T01:34:20.664385068Z" level=info msg="StopPodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" returns successfully" Dec 13 01:34:20.666676 containerd[1979]: time="2024-12-13T01:34:20.665849624Z" level=info msg="RemovePodSandbox for \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" Dec 13 01:34:20.666676 containerd[1979]: time="2024-12-13T01:34:20.665890904Z" level=info msg="Forcibly stopping sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\"" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.713 [WARNING][5746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa5dce48-74be-45a6-b213-0b52ff4a1cc4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"769911095551810d0c9aac847714cda41168d936db9008a03a97a811246e5d53", Pod:"coredns-7db6d8ff4d-cnqvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38ae70511dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.713 [INFO][5746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.713 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" iface="eth0" netns="" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.713 [INFO][5746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.713 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.740 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.740 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.741 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.747 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.747 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" HandleID="k8s-pod-network.2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cnqvk-eth0" Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.749 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.753173 containerd[1979]: 2024-12-13 01:34:20.751 [INFO][5746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2" Dec 13 01:34:20.754714 containerd[1979]: time="2024-12-13T01:34:20.753221619Z" level=info msg="TearDown network for sandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" successfully" Dec 13 01:34:20.757994 containerd[1979]: time="2024-12-13T01:34:20.757934253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:20.758363 containerd[1979]: time="2024-12-13T01:34:20.758020545Z" level=info msg="RemovePodSandbox \"2eaa7c00dc1059f984b8d85d43e3a833a431225dbd3979c46548f3a569df3fd2\" returns successfully" Dec 13 01:34:20.758851 containerd[1979]: time="2024-12-13T01:34:20.758823508Z" level=info msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.812 [WARNING][5770] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cc2106b-553b-4660-9b27-e2c825955271", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545", Pod:"csi-node-driver-h9g2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890e5acc998", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.813 [INFO][5770] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.813 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" iface="eth0" netns="" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.813 [INFO][5770] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.813 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.847 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.848 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.848 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.854 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.854 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.856 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.859554 containerd[1979]: 2024-12-13 01:34:20.857 [INFO][5770] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.860327 containerd[1979]: time="2024-12-13T01:34:20.859599357Z" level=info msg="TearDown network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" successfully" Dec 13 01:34:20.860327 containerd[1979]: time="2024-12-13T01:34:20.859628884Z" level=info msg="StopPodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" returns successfully" Dec 13 01:34:20.860327 containerd[1979]: time="2024-12-13T01:34:20.860293981Z" level=info msg="RemovePodSandbox for \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" Dec 13 01:34:20.860462 containerd[1979]: time="2024-12-13T01:34:20.860327281Z" level=info msg="Forcibly stopping sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\"" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.910 [WARNING][5795] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3cc2106b-553b-4660-9b27-e2c825955271", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545", Pod:"csi-node-driver-h9g2n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890e5acc998", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.910 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.910 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" iface="eth0" netns="" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.910 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.910 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.939 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.939 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.939 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.947 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.947 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" HandleID="k8s-pod-network.c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Workload="ip--172--31--21--168-k8s-csi--node--driver--h9g2n-eth0" Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.949 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:20.953943 containerd[1979]: 2024-12-13 01:34:20.951 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b" Dec 13 01:34:20.954816 containerd[1979]: time="2024-12-13T01:34:20.954264220Z" level=info msg="TearDown network for sandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" successfully" Dec 13 01:34:20.960625 containerd[1979]: time="2024-12-13T01:34:20.959239898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:20.960625 containerd[1979]: time="2024-12-13T01:34:20.959365579Z" level=info msg="RemovePodSandbox \"c8a2e70154e495bf46665598891321bff252266a56eb550d6306733c38850a2b\" returns successfully" Dec 13 01:34:20.961080 containerd[1979]: time="2024-12-13T01:34:20.961050999Z" level=info msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.081 [WARNING][5819] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65795569-3321-472e-aa6d-4a50b09325de", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f", Pod:"coredns-7db6d8ff4d-cxh29", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ab2d70712d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.081 [INFO][5819] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.081 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" iface="eth0" netns="" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.081 [INFO][5819] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.081 [INFO][5819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.144 [INFO][5829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.145 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.145 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.159 [WARNING][5829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.159 [INFO][5829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.162 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:21.173506 containerd[1979]: 2024-12-13 01:34:21.167 [INFO][5819] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.174444 containerd[1979]: time="2024-12-13T01:34:21.173557731Z" level=info msg="TearDown network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" successfully" Dec 13 01:34:21.174444 containerd[1979]: time="2024-12-13T01:34:21.173587635Z" level=info msg="StopPodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" returns successfully" Dec 13 01:34:21.174880 containerd[1979]: time="2024-12-13T01:34:21.174848576Z" level=info msg="RemovePodSandbox for \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" Dec 13 01:34:21.175037 containerd[1979]: time="2024-12-13T01:34:21.174885623Z" level=info msg="Forcibly stopping sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\"" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.275 [WARNING][5847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"65795569-3321-472e-aa6d-4a50b09325de", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"7482a8e59b159a63c482b4426391ff05f9ff7be23ad8250099a7c8432768bb3f", Pod:"coredns-7db6d8ff4d-cxh29", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0ab2d70712d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.276 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.276 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" iface="eth0" netns="" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.276 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.276 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.334 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.334 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.334 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.344 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.344 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" HandleID="k8s-pod-network.716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Workload="ip--172--31--21--168-k8s-coredns--7db6d8ff4d--cxh29-eth0" Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.346 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:21.354936 containerd[1979]: 2024-12-13 01:34:21.350 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea" Dec 13 01:34:21.354936 containerd[1979]: time="2024-12-13T01:34:21.353647286Z" level=info msg="TearDown network for sandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" successfully" Dec 13 01:34:21.364067 containerd[1979]: time="2024-12-13T01:34:21.364015863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:21.364270 containerd[1979]: time="2024-12-13T01:34:21.364239496Z" level=info msg="RemovePodSandbox \"716e2c99c627a9ef2c641d8ab97a6bbec5ec4e757e4a49af70ed29e8c8e07eea\" returns successfully" Dec 13 01:34:21.364863 containerd[1979]: time="2024-12-13T01:34:21.364817449Z" level=info msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" Dec 13 01:34:21.372087 containerd[1979]: time="2024-12-13T01:34:21.371780195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:34:21.374692 containerd[1979]: time="2024-12-13T01:34:21.374648354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.380831 containerd[1979]: time="2024-12-13T01:34:21.380652840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.970110263s" Dec 13 01:34:21.380831 containerd[1979]: time="2024-12-13T01:34:21.380698469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:34:21.381500 containerd[1979]: time="2024-12-13T01:34:21.381465974Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.383180 containerd[1979]: time="2024-12-13T01:34:21.383078310Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.383993 containerd[1979]: time="2024-12-13T01:34:21.383731868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:34:21.388368 containerd[1979]: time="2024-12-13T01:34:21.388335476Z" level=info msg="CreateContainer within sandbox \"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:34:21.420103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043796388.mount: Deactivated successfully. Dec 13 01:34:21.423300 containerd[1979]: time="2024-12-13T01:34:21.423056709Z" level=info msg="CreateContainer within sandbox \"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b\"" Dec 13 01:34:21.426198 containerd[1979]: time="2024-12-13T01:34:21.425678047Z" level=info msg="StartContainer for \"68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b\"" Dec 13 01:34:21.503795 systemd[1]: run-containerd-runc-k8s.io-68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b-runc.VrTgfD.mount: Deactivated successfully. Dec 13 01:34:21.522614 systemd[1]: Started cri-containerd-68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b.scope - libcontainer container 68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b. Dec 13 01:34:21.589422 containerd[1979]: time="2024-12-13T01:34:21.589239580Z" level=info msg="StartContainer for \"68f8bac886a956626cc72d930576d972262c365ac391c155a2942044cbd6a99b\" returns successfully" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.480 [WARNING][5872] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c486b937-2444-419d-bb0f-429a58e9c9a6", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317", Pod:"calico-apiserver-6f66dbc9d-8djcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03b027879ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.481 [INFO][5872] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.481 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" iface="eth0" netns="" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.481 [INFO][5872] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.481 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.568 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.569 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.569 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.579 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.579 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.582 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:21.595538 containerd[1979]: 2024-12-13 01:34:21.591 [INFO][5872] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.596374 containerd[1979]: time="2024-12-13T01:34:21.595575950Z" level=info msg="TearDown network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" successfully" Dec 13 01:34:21.596374 containerd[1979]: time="2024-12-13T01:34:21.595605881Z" level=info msg="StopPodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" returns successfully" Dec 13 01:34:21.605378 containerd[1979]: time="2024-12-13T01:34:21.605264672Z" level=info msg="RemovePodSandbox for \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" Dec 13 01:34:21.605581 containerd[1979]: time="2024-12-13T01:34:21.605546779Z" level=info msg="Forcibly stopping sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\"" Dec 13 01:34:21.749950 containerd[1979]: time="2024-12-13T01:34:21.749813852Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:21.753165 containerd[1979]: time="2024-12-13T01:34:21.753108805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:34:21.756103 containerd[1979]: time="2024-12-13T01:34:21.756020178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 371.683851ms" Dec 13 01:34:21.756460 containerd[1979]: time="2024-12-13T01:34:21.756110210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:34:21.758436 containerd[1979]: time="2024-12-13T01:34:21.758401722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:34:21.763344 containerd[1979]: time="2024-12-13T01:34:21.763302985Z" level=info msg="CreateContainer within sandbox \"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.705 [WARNING][5933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0", GenerateName:"calico-apiserver-6f66dbc9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c486b937-2444-419d-bb0f-429a58e9c9a6", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f66dbc9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-168", ContainerID:"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317", Pod:"calico-apiserver-6f66dbc9d-8djcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali03b027879ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.706 [INFO][5933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.706 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" iface="eth0" netns="" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.706 [INFO][5933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.706 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.741 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.741 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.741 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.750 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.750 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" HandleID="k8s-pod-network.fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Workload="ip--172--31--21--168-k8s-calico--apiserver--6f66dbc9d--8djcg-eth0" Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.753 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:34:21.764371 containerd[1979]: 2024-12-13 01:34:21.756 [INFO][5933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0" Dec 13 01:34:21.765775 containerd[1979]: time="2024-12-13T01:34:21.764404633Z" level=info msg="TearDown network for sandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" successfully" Dec 13 01:34:21.785851 containerd[1979]: time="2024-12-13T01:34:21.785655020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:34:21.785851 containerd[1979]: time="2024-12-13T01:34:21.785748453Z" level=info msg="RemovePodSandbox \"fd6dfcf0597f677db2ceafeccfcae83d0c94389d7b4f60e386d7103cd9e041f0\" returns successfully" Dec 13 01:34:21.790172 containerd[1979]: time="2024-12-13T01:34:21.790133013Z" level=info msg="CreateContainer within sandbox \"c371c25644a170730635d72e76d52ce85c890f9ef79c2e80322019b9c189a317\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c5a4b99d11cb1099e356a1e366c71f188a118269625bd3b4b513bbbe6aaaf4d\"" Dec 13 01:34:21.793772 containerd[1979]: time="2024-12-13T01:34:21.791775182Z" level=info msg="StartContainer for \"1c5a4b99d11cb1099e356a1e366c71f188a118269625bd3b4b513bbbe6aaaf4d\"" Dec 13 01:34:21.834303 systemd[1]: Started cri-containerd-1c5a4b99d11cb1099e356a1e366c71f188a118269625bd3b4b513bbbe6aaaf4d.scope - libcontainer container 1c5a4b99d11cb1099e356a1e366c71f188a118269625bd3b4b513bbbe6aaaf4d. Dec 13 01:34:21.916068 containerd[1979]: time="2024-12-13T01:34:21.914893124Z" level=info msg="StartContainer for \"1c5a4b99d11cb1099e356a1e366c71f188a118269625bd3b4b513bbbe6aaaf4d\" returns successfully" Dec 13 01:34:23.297728 kubelet[3413]: I1213 01:34:23.297690 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:34:23.691340 systemd[1]: Started sshd@9-172.31.21.168:22-139.178.68.195:33126.service - OpenSSH per-connection server daemon (139.178.68.195:33126). Dec 13 01:34:23.958683 sshd[5987]: Accepted publickey for core from 139.178.68.195 port 33126 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:23.961484 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:23.971054 systemd-logind[1953]: New session 10 of user core. Dec 13 01:34:23.974546 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 01:34:24.662220 sshd[5987]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:24.671841 systemd[1]: sshd@9-172.31.21.168:22-139.178.68.195:33126.service: Deactivated successfully. Dec 13 01:34:24.676787 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:34:24.680820 systemd-logind[1953]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:34:24.711200 systemd[1]: Started sshd@10-172.31.21.168:22-139.178.68.195:33134.service - OpenSSH per-connection server daemon (139.178.68.195:33134). Dec 13 01:34:24.713471 systemd-logind[1953]: Removed session 10. Dec 13 01:34:24.890963 sshd[6011]: Accepted publickey for core from 139.178.68.195 port 33134 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:24.891848 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:24.905696 systemd-logind[1953]: New session 11 of user core. Dec 13 01:34:24.912280 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:34:25.390499 sshd[6011]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:25.401541 systemd-logind[1953]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:34:25.406005 systemd[1]: sshd@10-172.31.21.168:22-139.178.68.195:33134.service: Deactivated successfully. Dec 13 01:34:25.413462 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:34:25.431094 systemd-logind[1953]: Removed session 11. Dec 13 01:34:25.443403 systemd[1]: Started sshd@11-172.31.21.168:22-139.178.68.195:33136.service - OpenSSH per-connection server daemon (139.178.68.195:33136). Dec 13 01:34:25.645201 containerd[1979]: time="2024-12-13T01:34:25.643734782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:25.645627 sshd[6026]: Accepted publickey for core from 139.178.68.195 port 33136 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:25.649270 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:25.666401 containerd[1979]: time="2024-12-13T01:34:25.666336134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:34:25.670457 containerd[1979]: time="2024-12-13T01:34:25.668553903Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:25.670716 systemd-logind[1953]: New session 12 of user core. Dec 13 01:34:25.673319 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 01:34:25.678887 containerd[1979]: time="2024-12-13T01:34:25.676153802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:34:25.678887 containerd[1979]: time="2024-12-13T01:34:25.677225692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.918784739s" Dec 13 01:34:25.678887 containerd[1979]: time="2024-12-13T01:34:25.677264856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:34:25.684430 containerd[1979]: time="2024-12-13T01:34:25.684385675Z" level=info msg="CreateContainer within sandbox \"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:34:25.730083 containerd[1979]: time="2024-12-13T01:34:25.730038000Z" level=info msg="CreateContainer within sandbox \"1d9fa490bea217f50373c6c706ed86b7c34ff9d7500b25f51c53f61631768545\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b\"" Dec 13 01:34:25.731840 containerd[1979]: time="2024-12-13T01:34:25.731802810Z" level=info msg="StartContainer for \"77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b\"" Dec 13 01:34:25.830189 systemd[1]: run-containerd-runc-k8s.io-77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b-runc.wYozfu.mount: Deactivated successfully. Dec 13 01:34:25.843303 systemd[1]: Started cri-containerd-77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b.scope - libcontainer container 77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b. Dec 13 01:34:26.014277 containerd[1979]: time="2024-12-13T01:34:26.006503220Z" level=info msg="StartContainer for \"77c0ae5cdc4935a95bb0d75a0397ac84dc24103b7456d5d9d3f9e1a64129800b\" returns successfully" Dec 13 01:34:26.131098 sshd[6026]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:26.144075 systemd-logind[1953]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:34:26.145188 systemd[1]: sshd@11-172.31.21.168:22-139.178.68.195:33136.service: Deactivated successfully. Dec 13 01:34:26.148724 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:34:26.151047 systemd-logind[1953]: Removed session 12. 
Dec 13 01:34:26.349077 kubelet[3413]: I1213 01:34:26.348468 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f66dbc9d-8djcg" podStartSLOduration=34.980524895 podStartE2EDuration="42.348448726s" podCreationTimestamp="2024-12-13 01:33:44 +0000 UTC" firstStartedPulling="2024-12-13 01:34:14.389715757 +0000 UTC m=+54.999117597" lastFinishedPulling="2024-12-13 01:34:21.757639591 +0000 UTC m=+62.367041428" observedRunningTime="2024-12-13 01:34:22.292625039 +0000 UTC m=+62.902026895" watchObservedRunningTime="2024-12-13 01:34:26.348448726 +0000 UTC m=+66.957850583" Dec 13 01:34:26.353277 kubelet[3413]: I1213 01:34:26.349950 3413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h9g2n" podStartSLOduration=28.627142699 podStartE2EDuration="41.349932596s" podCreationTimestamp="2024-12-13 01:33:45 +0000 UTC" firstStartedPulling="2024-12-13 01:34:12.957637021 +0000 UTC m=+53.567038857" lastFinishedPulling="2024-12-13 01:34:25.680426913 +0000 UTC m=+66.289828754" observedRunningTime="2024-12-13 01:34:26.347371817 +0000 UTC m=+66.956773672" watchObservedRunningTime="2024-12-13 01:34:26.349932596 +0000 UTC m=+66.959334452" Dec 13 01:34:27.029743 kubelet[3413]: I1213 01:34:27.025140 3413 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:34:27.043841 kubelet[3413]: I1213 01:34:27.043803 3413 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:34:27.088548 systemd[1]: run-containerd-runc-k8s.io-7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5-runc.x52OmU.mount: Deactivated successfully. Dec 13 01:34:31.170429 systemd[1]: Started sshd@12-172.31.21.168:22-139.178.68.195:40010.service - OpenSSH per-connection server daemon (139.178.68.195:40010). Dec 13 01:34:31.402257 sshd[6106]: Accepted publickey for core from 139.178.68.195 port 40010 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:31.406237 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:31.414636 systemd-logind[1953]: New session 13 of user core. Dec 13 01:34:31.425300 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:34:31.947099 sshd[6106]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:31.951303 systemd[1]: sshd@12-172.31.21.168:22-139.178.68.195:40010.service: Deactivated successfully. Dec 13 01:34:31.953498 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:34:31.954723 systemd-logind[1953]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:34:31.955895 systemd-logind[1953]: Removed session 13. Dec 13 01:34:37.006395 systemd[1]: Started sshd@13-172.31.21.168:22-139.178.68.195:47996.service - OpenSSH per-connection server daemon (139.178.68.195:47996). Dec 13 01:34:37.225928 sshd[6121]: Accepted publickey for core from 139.178.68.195 port 47996 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:37.227605 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:37.237510 systemd-logind[1953]: New session 14 of user core. Dec 13 01:34:37.248282 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 01:34:37.705381 sshd[6121]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:37.716587 systemd[1]: sshd@13-172.31.21.168:22-139.178.68.195:47996.service: Deactivated successfully. Dec 13 01:34:37.719814 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:34:37.721886 systemd-logind[1953]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:34:37.723299 systemd-logind[1953]: Removed session 14. Dec 13 01:34:42.756766 systemd[1]: Started sshd@14-172.31.21.168:22-139.178.68.195:48012.service - OpenSSH per-connection server daemon (139.178.68.195:48012). Dec 13 01:34:43.009958 sshd[6137]: Accepted publickey for core from 139.178.68.195 port 48012 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:43.012776 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:43.028679 systemd-logind[1953]: New session 15 of user core. Dec 13 01:34:43.033504 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:34:43.402126 sshd[6137]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:43.408304 systemd[1]: sshd@14-172.31.21.168:22-139.178.68.195:48012.service: Deactivated successfully. Dec 13 01:34:43.412155 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:34:43.414325 systemd-logind[1953]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:34:43.415848 systemd-logind[1953]: Removed session 15. Dec 13 01:34:44.138793 kubelet[3413]: I1213 01:34:44.138618 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:34:48.447383 systemd[1]: Started sshd@15-172.31.21.168:22-139.178.68.195:36244.service - OpenSSH per-connection server daemon (139.178.68.195:36244). Dec 13 01:34:48.631903 kubelet[3413]: I1213 01:34:48.631344 3413 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:34:48.684679 sshd[6152]: Accepted publickey for core from 139.178.68.195 port 36244 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:48.690170 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:48.700959 systemd-logind[1953]: New session 16 of user core. Dec 13 01:34:48.709210 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:34:49.717516 sshd[6152]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:49.740194 systemd[1]: sshd@15-172.31.21.168:22-139.178.68.195:36244.service: Deactivated successfully. Dec 13 01:34:49.760043 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:34:49.782760 systemd-logind[1953]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:34:49.805671 systemd[1]: Started sshd@16-172.31.21.168:22-139.178.68.195:36248.service - OpenSSH per-connection server daemon (139.178.68.195:36248). Dec 13 01:34:49.809519 systemd-logind[1953]: Removed session 16. Dec 13 01:34:49.990401 sshd[6193]: Accepted publickey for core from 139.178.68.195 port 36248 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:49.993100 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:50.003895 systemd-logind[1953]: New session 17 of user core. Dec 13 01:34:50.012196 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 01:34:50.802803 sshd[6193]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:50.829857 systemd[1]: sshd@16-172.31.21.168:22-139.178.68.195:36248.service: Deactivated successfully. Dec 13 01:34:50.842035 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:34:50.845684 systemd-logind[1953]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:34:50.854376 systemd[1]: Started sshd@17-172.31.21.168:22-139.178.68.195:36262.service - OpenSSH per-connection server daemon (139.178.68.195:36262). Dec 13 01:34:50.859463 systemd-logind[1953]: Removed session 17. Dec 13 01:34:51.068641 sshd[6204]: Accepted publickey for core from 139.178.68.195 port 36262 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:51.071046 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:51.078690 systemd-logind[1953]: New session 18 of user core. Dec 13 01:34:51.085282 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:34:54.263383 sshd[6204]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:54.272641 systemd[1]: sshd@17-172.31.21.168:22-139.178.68.195:36262.service: Deactivated successfully. Dec 13 01:34:54.279400 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:34:54.283517 systemd-logind[1953]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:34:54.299802 systemd-logind[1953]: Removed session 18. Dec 13 01:34:54.309806 systemd[1]: Started sshd@18-172.31.21.168:22-139.178.68.195:36278.service - OpenSSH per-connection server daemon (139.178.68.195:36278). Dec 13 01:34:54.522151 sshd[6228]: Accepted publickey for core from 139.178.68.195 port 36278 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:54.535140 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:54.551663 systemd-logind[1953]: New session 19 of user core. Dec 13 01:34:54.561515 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:34:55.706775 sshd[6228]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:55.716005 systemd-logind[1953]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:34:55.718478 systemd[1]: sshd@18-172.31.21.168:22-139.178.68.195:36278.service: Deactivated successfully. Dec 13 01:34:55.724052 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:34:55.736629 systemd-logind[1953]: Removed session 19. Dec 13 01:34:55.749826 systemd[1]: Started sshd@19-172.31.21.168:22-139.178.68.195:36282.service - OpenSSH per-connection server daemon (139.178.68.195:36282). Dec 13 01:34:55.934038 sshd[6239]: Accepted publickey for core from 139.178.68.195 port 36282 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg Dec 13 01:34:55.936707 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:34:55.944454 systemd-logind[1953]: New session 20 of user core. Dec 13 01:34:55.953493 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:34:56.173152 sshd[6239]: pam_unix(sshd:session): session closed for user core Dec 13 01:34:56.179669 systemd[1]: sshd@19-172.31.21.168:22-139.178.68.195:36282.service: Deactivated successfully. Dec 13 01:34:56.182959 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:34:56.184390 systemd-logind[1953]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:34:56.185670 systemd-logind[1953]: Removed session 20. 
Dec 13 01:35:01.224898 systemd[1]: Started sshd@20-172.31.21.168:22-139.178.68.195:56706.service - OpenSSH per-connection server daemon (139.178.68.195:56706).
Dec 13 01:35:01.424200 sshd[6271]: Accepted publickey for core from 139.178.68.195 port 56706 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:01.441351 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:01.459948 systemd-logind[1953]: New session 21 of user core.
Dec 13 01:35:01.476595 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:35:02.193624 sshd[6271]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:02.209863 systemd[1]: sshd@20-172.31.21.168:22-139.178.68.195:56706.service: Deactivated successfully.
Dec 13 01:35:02.230142 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:35:02.234555 systemd-logind[1953]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:35:02.236835 systemd-logind[1953]: Removed session 21.
Dec 13 01:35:07.245790 systemd[1]: Started sshd@21-172.31.21.168:22-139.178.68.195:57796.service - OpenSSH per-connection server daemon (139.178.68.195:57796).
Dec 13 01:35:07.474741 sshd[6287]: Accepted publickey for core from 139.178.68.195 port 57796 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:07.477066 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:07.483049 systemd-logind[1953]: New session 22 of user core.
Dec 13 01:35:07.491201 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:35:07.736909 sshd[6287]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:07.743690 systemd[1]: sshd@21-172.31.21.168:22-139.178.68.195:57796.service: Deactivated successfully.
Dec 13 01:35:07.747896 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:35:07.750632 systemd-logind[1953]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:35:07.752950 systemd-logind[1953]: Removed session 22.
Dec 13 01:35:12.775534 systemd[1]: Started sshd@22-172.31.21.168:22-139.178.68.195:57804.service - OpenSSH per-connection server daemon (139.178.68.195:57804).
Dec 13 01:35:12.946095 sshd[6302]: Accepted publickey for core from 139.178.68.195 port 57804 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:12.949439 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:12.954749 systemd-logind[1953]: New session 23 of user core.
Dec 13 01:35:12.962208 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:35:13.246488 sshd[6302]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:13.253108 systemd[1]: sshd@22-172.31.21.168:22-139.178.68.195:57804.service: Deactivated successfully.
Dec 13 01:35:13.257751 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:35:13.259454 systemd-logind[1953]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:35:13.270193 systemd-logind[1953]: Removed session 23.
Dec 13 01:35:18.283448 systemd[1]: Started sshd@23-172.31.21.168:22-139.178.68.195:54604.service - OpenSSH per-connection server daemon (139.178.68.195:54604).
Dec 13 01:35:18.463104 sshd[6314]: Accepted publickey for core from 139.178.68.195 port 54604 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:18.464544 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:18.478286 systemd-logind[1953]: New session 24 of user core.
Dec 13 01:35:18.488881 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:35:18.756285 sshd[6314]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:18.773518 systemd[1]: sshd@23-172.31.21.168:22-139.178.68.195:54604.service: Deactivated successfully.
Dec 13 01:35:18.782261 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:35:18.785935 systemd-logind[1953]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:35:18.791381 systemd-logind[1953]: Removed session 24.
Dec 13 01:35:23.803454 systemd[1]: Started sshd@24-172.31.21.168:22-139.178.68.195:54614.service - OpenSSH per-connection server daemon (139.178.68.195:54614).
Dec 13 01:35:24.033934 sshd[6350]: Accepted publickey for core from 139.178.68.195 port 54614 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:24.040526 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:24.052966 systemd-logind[1953]: New session 25 of user core.
Dec 13 01:35:24.059332 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:35:24.443795 sshd[6350]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:24.451142 systemd[1]: sshd@24-172.31.21.168:22-139.178.68.195:54614.service: Deactivated successfully.
Dec 13 01:35:24.454310 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:35:24.455380 systemd-logind[1953]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:35:24.457397 systemd-logind[1953]: Removed session 25.
Dec 13 01:35:27.752038 systemd[1]: run-containerd-runc-k8s.io-7b25389d2704b691e0a927391858104953b88ebe8406d298e53584914be044e5-runc.LNuRQU.mount: Deactivated successfully.
Dec 13 01:35:29.493705 systemd[1]: Started sshd@25-172.31.21.168:22-139.178.68.195:46578.service - OpenSSH per-connection server daemon (139.178.68.195:46578).
Dec 13 01:35:29.687145 sshd[6408]: Accepted publickey for core from 139.178.68.195 port 46578 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:35:29.688752 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:35:29.696075 systemd-logind[1953]: New session 26 of user core.
Dec 13 01:35:29.703596 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:35:29.900016 sshd[6408]: pam_unix(sshd:session): session closed for user core
Dec 13 01:35:29.906024 systemd[1]: sshd@25-172.31.21.168:22-139.178.68.195:46578.service: Deactivated successfully.
Dec 13 01:35:29.908730 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:35:29.911066 systemd-logind[1953]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:35:29.914332 systemd-logind[1953]: Removed session 26.
Dec 13 01:35:43.420730 systemd[1]: cri-containerd-80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4.scope: Deactivated successfully.
Dec 13 01:35:43.422274 systemd[1]: cri-containerd-80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4.scope: Consumed 3.774s CPU time, 25.8M memory peak, 0B memory swap peak.
Dec 13 01:35:43.556285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4-rootfs.mount: Deactivated successfully.
Dec 13 01:35:43.632230 containerd[1979]: time="2024-12-13T01:35:43.602443017Z" level=info msg="shim disconnected" id=80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4 namespace=k8s.io
Dec 13 01:35:43.655895 containerd[1979]: time="2024-12-13T01:35:43.655815953Z" level=warning msg="cleaning up after shim disconnected" id=80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4 namespace=k8s.io
Dec 13 01:35:43.655895 containerd[1979]: time="2024-12-13T01:35:43.655861663Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:35:44.030303 kubelet[3413]: I1213 01:35:44.030241 3413 scope.go:117] "RemoveContainer" containerID="80acd6fbd181731bdbd433483bd8346bdae0287344bd867d003d838c5c0e81b4"
Dec 13 01:35:44.062708 containerd[1979]: time="2024-12-13T01:35:44.062509327Z" level=info msg="CreateContainer within sandbox \"8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:35:44.124154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484004270.mount: Deactivated successfully.
Dec 13 01:35:44.157157 containerd[1979]: time="2024-12-13T01:35:44.157106046Z" level=info msg="CreateContainer within sandbox \"8c4ca4d4ec598db776d7d2645c2b5421cd9949640c8e57a282e0ad863d65863f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f0e4d644405b0498a27c9236b8ef34ec6b9554f4f01c2abe46e1154297a18ad9\""
Dec 13 01:35:44.158671 containerd[1979]: time="2024-12-13T01:35:44.158285616Z" level=info msg="StartContainer for \"f0e4d644405b0498a27c9236b8ef34ec6b9554f4f01c2abe46e1154297a18ad9\""
Dec 13 01:35:44.249299 systemd[1]: Started cri-containerd-f0e4d644405b0498a27c9236b8ef34ec6b9554f4f01c2abe46e1154297a18ad9.scope - libcontainer container f0e4d644405b0498a27c9236b8ef34ec6b9554f4f01c2abe46e1154297a18ad9.
Dec 13 01:35:44.365018 containerd[1979]: time="2024-12-13T01:35:44.364853055Z" level=info msg="StartContainer for \"f0e4d644405b0498a27c9236b8ef34ec6b9554f4f01c2abe46e1154297a18ad9\" returns successfully"
Dec 13 01:35:44.764655 systemd[1]: cri-containerd-e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35.scope: Deactivated successfully.
Dec 13 01:35:44.766126 systemd[1]: cri-containerd-e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35.scope: Consumed 3.820s CPU time.
Dec 13 01:35:44.815813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35-rootfs.mount: Deactivated successfully.
Dec 13 01:35:44.817170 containerd[1979]: time="2024-12-13T01:35:44.816190182Z" level=info msg="shim disconnected" id=e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35 namespace=k8s.io
Dec 13 01:35:44.817170 containerd[1979]: time="2024-12-13T01:35:44.816350905Z" level=warning msg="cleaning up after shim disconnected" id=e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35 namespace=k8s.io
Dec 13 01:35:44.817170 containerd[1979]: time="2024-12-13T01:35:44.816366236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:35:44.839074 containerd[1979]: time="2024-12-13T01:35:44.838578067Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:35:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:35:45.002745 kubelet[3413]: I1213 01:35:45.002707 3413 scope.go:117] "RemoveContainer" containerID="e7d4de585ad35b29007a34d13a8cafef3cc9d707e018241f58da7d42ccc9df35"
Dec 13 01:35:45.026329 containerd[1979]: time="2024-12-13T01:35:45.025788965Z" level=info msg="CreateContainer within sandbox \"bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 01:35:45.057175 containerd[1979]: time="2024-12-13T01:35:45.057112555Z" level=info msg="CreateContainer within sandbox \"bec9fb2f112bba0b0b6be3e5a7c9748bb1a6b5f829e40fd45ca10116e54759cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"489f0facf06fa9a057d7db54470cd6f5541445fbf43f9d48e86766d2a78ffa45\""
Dec 13 01:35:45.067088 containerd[1979]: time="2024-12-13T01:35:45.066354351Z" level=info msg="StartContainer for \"489f0facf06fa9a057d7db54470cd6f5541445fbf43f9d48e86766d2a78ffa45\""
Dec 13 01:35:45.071514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418343078.mount: Deactivated successfully.
Dec 13 01:35:45.134674 systemd[1]: Started cri-containerd-489f0facf06fa9a057d7db54470cd6f5541445fbf43f9d48e86766d2a78ffa45.scope - libcontainer container 489f0facf06fa9a057d7db54470cd6f5541445fbf43f9d48e86766d2a78ffa45.
Dec 13 01:35:45.207433 containerd[1979]: time="2024-12-13T01:35:45.206577054Z" level=info msg="StartContainer for \"489f0facf06fa9a057d7db54470cd6f5541445fbf43f9d48e86766d2a78ffa45\" returns successfully"
Dec 13 01:35:48.365080 systemd[1]: cri-containerd-0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1.scope: Deactivated successfully.
Dec 13 01:35:48.365381 systemd[1]: cri-containerd-0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1.scope: Consumed 1.628s CPU time, 19.9M memory peak, 0B memory swap peak.
Dec 13 01:35:48.449471 containerd[1979]: time="2024-12-13T01:35:48.449121487Z" level=info msg="shim disconnected" id=0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1 namespace=k8s.io
Dec 13 01:35:48.449471 containerd[1979]: time="2024-12-13T01:35:48.449211121Z" level=warning msg="cleaning up after shim disconnected" id=0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1 namespace=k8s.io
Dec 13 01:35:48.449471 containerd[1979]: time="2024-12-13T01:35:48.449228036Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:35:48.452483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1-rootfs.mount: Deactivated successfully.
Dec 13 01:35:49.036954 kubelet[3413]: I1213 01:35:49.036911 3413 scope.go:117] "RemoveContainer" containerID="0c3f0ae47ac4d9b44aa718e8311511acceb1c1bf3e7e8cefb29bede7a7258dc1"
Dec 13 01:35:49.045862 containerd[1979]: time="2024-12-13T01:35:49.045815883Z" level=info msg="CreateContainer within sandbox \"e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:35:49.074670 containerd[1979]: time="2024-12-13T01:35:49.074245792Z" level=info msg="CreateContainer within sandbox \"e2df51f13d49b0a8c27d64cf05147f2c8932d429ddded6cd49f20d60400918bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"add3215d03927dd5a7f1d2570aa58bd452277602fe3995383127c071af302227\""
Dec 13 01:35:49.077107 containerd[1979]: time="2024-12-13T01:35:49.076820596Z" level=info msg="StartContainer for \"add3215d03927dd5a7f1d2570aa58bd452277602fe3995383127c071af302227\""
Dec 13 01:35:49.135358 systemd[1]: Started cri-containerd-add3215d03927dd5a7f1d2570aa58bd452277602fe3995383127c071af302227.scope - libcontainer container add3215d03927dd5a7f1d2570aa58bd452277602fe3995383127c071af302227.
Dec 13 01:35:49.244746 containerd[1979]: time="2024-12-13T01:35:49.244677289Z" level=info msg="StartContainer for \"add3215d03927dd5a7f1d2570aa58bd452277602fe3995383127c071af302227\" returns successfully"
Dec 13 01:35:53.044401 kubelet[3413]: E1213 01:35:53.044335 3413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:36:03.048542 kubelet[3413]: E1213 01:36:03.048143 3413 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-168?timeout=10s\": context deadline exceeded"