Sep 4 17:26:27.993252 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:26:27.993364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:26:27.993381 kernel: BIOS-provided physical RAM map:
Sep 4 17:26:27.993393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:26:27.993405 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:26:27.993416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:26:27.993435 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Sep 4 17:26:27.993448 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Sep 4 17:26:27.993460 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Sep 4 17:26:27.993473 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:26:27.993486 kernel: NX (Execute Disable) protection: active
Sep 4 17:26:27.993498 kernel: APIC: Static calls initialized
Sep 4 17:26:27.993510 kernel: SMBIOS 2.7 present.
Sep 4 17:26:27.993524 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 4 17:26:27.993542 kernel: Hypervisor detected: KVM
Sep 4 17:26:27.993556 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:26:27.993570 kernel: kvm-clock: using sched offset of 6171926509 cycles
Sep 4 17:26:27.993585 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:26:27.993600 kernel: tsc: Detected 2500.004 MHz processor
Sep 4 17:26:27.993614 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:26:27.993629 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:26:27.993644 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Sep 4 17:26:27.994309 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:26:27.994354 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:26:27.994409 kernel: Using GB pages for direct mapping
Sep 4 17:26:27.994425 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:26:27.994439 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Sep 4 17:26:27.994454 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Sep 4 17:26:27.994469 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:26:27.994483 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 4 17:26:27.994503 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Sep 4 17:26:27.994518 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 17:26:27.994532 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:26:27.994547 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 4 17:26:27.994562 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:26:27.994576 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 4 17:26:27.994591 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 4 17:26:27.994605 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 4 17:26:27.994622 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Sep 4 17:26:27.994637 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Sep 4 17:26:27.994658 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Sep 4 17:26:27.994672 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Sep 4 17:26:27.994684 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Sep 4 17:26:27.994696 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Sep 4 17:26:27.994712 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Sep 4 17:26:27.994724 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Sep 4 17:26:27.994736 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Sep 4 17:26:27.994749 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Sep 4 17:26:27.994764 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 4 17:26:27.994780 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 4 17:26:27.994793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 4 17:26:27.994808 kernel: NUMA: Initialized distance table, cnt=1
Sep 4 17:26:27.994823 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Sep 4 17:26:27.994886 kernel: Zone ranges:
Sep 4 17:26:27.994904 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:26:27.994920 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Sep 4 17:26:27.994935 kernel: Normal empty
Sep 4 17:26:27.994950 kernel: Movable zone start for each node
Sep 4 17:26:27.994966 kernel: Early memory node ranges
Sep 4 17:26:27.994981 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:26:27.994996 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Sep 4 17:26:27.995012 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Sep 4 17:26:27.995030 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:26:27.995128 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:26:27.995145 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Sep 4 17:26:27.995161 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 17:26:27.995177 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:26:27.995193 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 4 17:26:27.995208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:26:27.995223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:26:27.995237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:26:27.995254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:26:27.995270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:26:27.995304 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:26:27.995319 kernel: TSC deadline timer available
Sep 4 17:26:27.995334 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 4 17:26:27.995349 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:26:27.995364 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Sep 4 17:26:27.995379 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:26:27.995395 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:26:27.995410 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 4 17:26:27.995429 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Sep 4 17:26:27.995444 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Sep 4 17:26:27.995499 kernel: pcpu-alloc: [0] 0 1
Sep 4 17:26:27.995513 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:26:27.995528 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:26:27.995585 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:26:27.995602 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:26:27.995621 kernel: random: crng init done
Sep 4 17:26:27.995634 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:26:27.995648 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 4 17:26:27.995662 kernel: Fallback order for Node 0: 0
Sep 4 17:26:27.995675 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Sep 4 17:26:27.995689 kernel: Policy zone: DMA32
Sep 4 17:26:27.995702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:26:27.995715 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 131296K reserved, 0K cma-reserved)
Sep 4 17:26:27.995730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:26:27.995747 kernel: Kernel/User page tables isolation: enabled
Sep 4 17:26:27.995762 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:26:27.995775 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:26:27.995789 kernel: Dynamic Preempt: voluntary
Sep 4 17:26:27.995803 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:26:27.995818 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:26:27.995832 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:26:27.995846 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:26:27.995859 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:26:27.995873 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:26:27.995894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:26:27.995908 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:26:27.995977 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 4 17:26:27.995996 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:26:27.996012 kernel: Console: colour VGA+ 80x25
Sep 4 17:26:27.996028 kernel: printk: console [ttyS0] enabled
Sep 4 17:26:27.996044 kernel: ACPI: Core revision 20230628
Sep 4 17:26:27.996060 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 4 17:26:27.996076 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:26:27.996094 kernel: x2apic enabled
Sep 4 17:26:27.996109 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:26:27.996134 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Sep 4 17:26:27.996151 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Sep 4 17:26:27.996165 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 4 17:26:27.996180 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Sep 4 17:26:27.996194 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:26:27.996208 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:26:27.996222 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:26:27.996236 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:26:27.996251 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 4 17:26:27.996265 kernel: RETBleed: Vulnerable
Sep 4 17:26:27.996306 kernel: Speculative Store Bypass: Vulnerable
Sep 4 17:26:27.996322 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:26:27.996338 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 4 17:26:27.996354 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 4 17:26:27.996370 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:26:27.996386 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:26:27.996462 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:26:27.996480 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 4 17:26:27.996497 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 4 17:26:27.996513 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 4 17:26:27.996578 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 4 17:26:27.996595 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 4 17:26:27.996611 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 4 17:26:27.996627 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:26:27.996642 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 4 17:26:27.996658 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 4 17:26:27.996674 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 4 17:26:27.996692 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 4 17:26:27.996705 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 4 17:26:27.996719 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 4 17:26:27.996735 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 4 17:26:27.996751 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:26:27.996767 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:26:27.996783 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:26:27.996799 kernel: SELinux: Initializing.
Sep 4 17:26:27.996815 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:26:27.996831 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 4 17:26:27.996847 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 4 17:26:27.996864 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:26:27.996883 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:26:27.996899 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:26:27.996913 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 4 17:26:27.996926 kernel: signal: max sigframe size: 3632
Sep 4 17:26:27.996938 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:26:27.996952 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:26:27.996964 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 4 17:26:27.996978 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:26:27.997034 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:26:27.997057 kernel: .... node #0, CPUs: #1
Sep 4 17:26:27.997073 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 4 17:26:27.997086 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 4 17:26:27.997098 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:26:27.997111 kernel: smpboot: Max logical packages: 1
Sep 4 17:26:27.997123 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Sep 4 17:26:27.997137 kernel: devtmpfs: initialized
Sep 4 17:26:27.997150 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:26:27.997220 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:26:27.997233 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:26:27.997246 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:26:27.997260 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:26:27.997272 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:26:27.997304 kernel: audit: type=2000 audit(1725470787.139:1): state=initialized audit_enabled=0 res=1
Sep 4 17:26:27.997317 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:26:27.997331 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:26:27.997345 kernel: cpuidle: using governor menu
Sep 4 17:26:27.997363 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:26:27.997377 kernel: dca service started, version 1.12.1
Sep 4 17:26:27.997390 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:26:27.997404 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:26:27.997419 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:26:27.997433 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:26:27.997447 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:26:27.997462 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:26:27.997476 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:26:27.997493 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:26:27.997508 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:26:27.997522 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:26:27.997537 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 4 17:26:27.997551 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:26:27.997566 kernel: ACPI: Interpreter enabled
Sep 4 17:26:27.997581 kernel: ACPI: PM: (supports S0 S5)
Sep 4 17:26:27.997595 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:26:27.997650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:26:27.997669 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:26:27.997684 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 4 17:26:27.997699 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:26:27.997927 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:26:27.998124 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 4 17:26:27.998288 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 4 17:26:27.998308 kernel: acpiphp: Slot [3] registered
Sep 4 17:26:27.998329 kernel: acpiphp: Slot [4] registered
Sep 4 17:26:27.998344 kernel: acpiphp: Slot [5] registered
Sep 4 17:26:27.998359 kernel: acpiphp: Slot [6] registered
Sep 4 17:26:27.998375 kernel: acpiphp: Slot [7] registered
Sep 4 17:26:27.998389 kernel: acpiphp: Slot [8] registered
Sep 4 17:26:27.998402 kernel: acpiphp: Slot [9] registered
Sep 4 17:26:27.998417 kernel: acpiphp: Slot [10] registered
Sep 4 17:26:27.998431 kernel: acpiphp: Slot [11] registered
Sep 4 17:26:27.998445 kernel: acpiphp: Slot [12] registered
Sep 4 17:26:27.998460 kernel: acpiphp: Slot [13] registered
Sep 4 17:26:27.998478 kernel: acpiphp: Slot [14] registered
Sep 4 17:26:27.998492 kernel: acpiphp: Slot [15] registered
Sep 4 17:26:27.998507 kernel: acpiphp: Slot [16] registered
Sep 4 17:26:27.998522 kernel: acpiphp: Slot [17] registered
Sep 4 17:26:27.998535 kernel: acpiphp: Slot [18] registered
Sep 4 17:26:27.998551 kernel: acpiphp: Slot [19] registered
Sep 4 17:26:27.998566 kernel: acpiphp: Slot [20] registered
Sep 4 17:26:27.998581 kernel: acpiphp: Slot [21] registered
Sep 4 17:26:27.998596 kernel: acpiphp: Slot [22] registered
Sep 4 17:26:27.998613 kernel: acpiphp: Slot [23] registered
Sep 4 17:26:27.998627 kernel: acpiphp: Slot [24] registered
Sep 4 17:26:27.998641 kernel: acpiphp: Slot [25] registered
Sep 4 17:26:27.998656 kernel: acpiphp: Slot [26] registered
Sep 4 17:26:27.998670 kernel: acpiphp: Slot [27] registered
Sep 4 17:26:27.998754 kernel: acpiphp: Slot [28] registered
Sep 4 17:26:27.998771 kernel: acpiphp: Slot [29] registered
Sep 4 17:26:27.998786 kernel: acpiphp: Slot [30] registered
Sep 4 17:26:27.998800 kernel: acpiphp: Slot [31] registered
Sep 4 17:26:27.998814 kernel: PCI host bridge to bus 0000:00
Sep 4 17:26:27.998982 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:26:27.999111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:26:27.999322 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:26:27.999450 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 4 17:26:27.999629 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:26:27.999852 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:26:28.000118 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:26:28.000343 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 4 17:26:28.000547 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 17:26:28.000752 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Sep 4 17:26:28.000927 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 4 17:26:28.001122 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 4 17:26:28.001256 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 4 17:26:28.001656 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 4 17:26:28.001800 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 4 17:26:28.002101 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 4 17:26:28.002270 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 4 17:26:28.004027 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Sep 4 17:26:28.004250 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 4 17:26:28.004545 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:26:28.004848 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:26:28.004999 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Sep 4 17:26:28.005148 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:26:28.005393 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Sep 4 17:26:28.005440 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:26:28.005480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:26:28.005494 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:26:28.005537 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:26:28.005552 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:26:28.005587 kernel: iommu: Default domain type: Translated
Sep 4 17:26:28.005602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:26:28.005640 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:26:28.005655 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:26:28.005691 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:26:28.005704 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Sep 4 17:26:28.005907 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 4 17:26:28.006047 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 4 17:26:28.006177 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:26:28.006195 kernel: vgaarb: loaded
Sep 4 17:26:28.006209 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 4 17:26:28.006223 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 4 17:26:28.006237 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:26:28.006251 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:26:28.006339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:26:28.006358 kernel: pnp: PnP ACPI init
Sep 4 17:26:28.006372 kernel: pnp: PnP ACPI: found 5 devices
Sep 4 17:26:28.006387 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:26:28.006402 kernel: NET: Registered PF_INET protocol family
Sep 4 17:26:28.006415 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:26:28.006429 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 4 17:26:28.006443 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:26:28.006457 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 4 17:26:28.006471 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 4 17:26:28.006488 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 4 17:26:28.006502 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:26:28.006516 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 4 17:26:28.006531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:26:28.006545 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:26:28.006674 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:26:28.006794 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:26:28.006972 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:26:28.007103 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 4 17:26:28.007340 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:26:28.007404 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:26:28.007419 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 4 17:26:28.007433 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Sep 4 17:26:28.007447 kernel: clocksource: Switched to clocksource tsc
Sep 4 17:26:28.007461 kernel: Initialise system trusted keyrings
Sep 4 17:26:28.007475 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 4 17:26:28.007489 kernel: Key type asymmetric registered
Sep 4 17:26:28.007509 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:26:28.007523 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:26:28.007583 kernel: io scheduler mq-deadline registered
Sep 4 17:26:28.007600 kernel: io scheduler kyber registered
Sep 4 17:26:28.007613 kernel: io scheduler bfq registered
Sep 4 17:26:28.007627 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:26:28.007695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:26:28.007711 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:26:28.007727 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:26:28.007747 kernel: i8042: Warning: Keylock active
Sep 4 17:26:28.007761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:26:28.007778 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:26:28.007993 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 4 17:26:28.008117 kernel: rtc_cmos 00:00: registered as rtc0
Sep 4 17:26:28.008266 kernel: rtc_cmos 00:00: setting system clock to 2024-09-04T17:26:27 UTC (1725470787)
Sep 4 17:26:28.008536 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 4 17:26:28.008559 kernel: intel_pstate: CPU model not supported
Sep 4 17:26:28.008579 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:26:28.008593 kernel: Segment Routing with IPv6
Sep 4 17:26:28.008606 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:26:28.008621 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:26:28.008634 kernel: Key type dns_resolver registered
Sep 4 17:26:28.008648 kernel: IPI shorthand broadcast: enabled
Sep 4 17:26:28.008662 kernel: sched_clock: Marking stable (532001909, 265276524)->(874572574, -77294141)
Sep 4 17:26:28.008675 kernel: registered taskstats version 1
Sep 4 17:26:28.008689 kernel: Loading compiled-in X.509 certificates
Sep 4 17:26:28.008706 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:26:28.008720 kernel: Key type .fscrypt registered
Sep 4 17:26:28.008733 kernel: Key type fscrypt-provisioning registered
Sep 4 17:26:28.008748 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:26:28.008812 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:26:28.008828 kernel: ima: No architecture policies found
Sep 4 17:26:28.008842 kernel: clk: Disabling unused clocks
Sep 4 17:26:28.008857 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:26:28.008875 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:26:28.008888 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:26:28.008902 kernel: Run /init as init process
Sep 4 17:26:28.008915 kernel: with arguments:
Sep 4 17:26:28.008929 kernel: /init
Sep 4 17:26:28.008942 kernel: with environment:
Sep 4 17:26:28.008956 kernel: HOME=/
Sep 4 17:26:28.009008 kernel: TERM=linux
Sep 4 17:26:28.009024 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:26:28.009042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:26:28.009062 systemd[1]: Detected virtualization amazon.
Sep 4 17:26:28.009094 systemd[1]: Detected architecture x86-64.
Sep 4 17:26:28.009108 systemd[1]: Running in initrd.
Sep 4 17:26:28.009122 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:26:28.009139 systemd[1]: Hostname set to .
Sep 4 17:26:28.010093 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:26:28.010114 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:26:28.010213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:26:28.010230 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:26:28.010249 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:26:28.010267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:26:28.010321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:26:28.010341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:26:28.010358 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:26:28.010373 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:26:28.010388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:26:28.010405 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:26:28.010421 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:26:28.010437 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:26:28.010469 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:26:28.010483 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:26:28.010496 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:26:28.010510 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:26:28.010524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:26:28.010539 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:26:28.010554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:26:28.010568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:26:28.010583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:26:28.010601 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:26:28.010618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:26:28.010677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:26:28.010692 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:26:28.010708 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:26:28.010723 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:26:28.010737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:26:28.010756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:26:28.010802 systemd-journald[178]: Collecting audit messages is disabled.
Sep 4 17:26:28.010844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:26:28.010861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:26:28.010878 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:26:28.010895 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:26:28.010913 systemd-journald[178]: Journal started
Sep 4 17:26:28.010951 systemd-journald[178]: Runtime Journal (/run/log/journal/ec210d0e3da1e74e050a4df80e532de6) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:26:28.025382 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:26:28.027305 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:26:28.036359 systemd-modules-load[179]: Inserted module 'overlay'
Sep 4 17:26:28.051527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:26:28.184856 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:26:28.184894 kernel: Bridge firewalling registered
Sep 4 17:26:28.094155 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 4 17:26:28.182972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:26:28.196601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:26:28.198565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:26:28.220583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:26:28.222695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:26:28.244998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:26:28.248431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:26:28.273197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:26:28.285974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:26:28.287686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:26:28.295506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:26:28.308107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:26:28.319886 systemd-resolved[201]: Positive Trust Anchors:
Sep 4 17:26:28.320224 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:26:28.320295 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:26:28.330323 dracut-cmdline[213]: dracut-dracut-053
Sep 4 17:26:28.331547 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:26:28.339223 systemd-resolved[201]: Defaulting to hostname 'linux'.
Sep 4 17:26:28.341757 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:26:28.344116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:26:28.420315 kernel: SCSI subsystem initialized
Sep 4 17:26:28.431307 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:26:28.445308 kernel: iscsi: registered transport (tcp)
Sep 4 17:26:28.472590 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:26:28.472671 kernel: QLogic iSCSI HBA Driver
Sep 4 17:26:28.513619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:26:28.523590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:26:28.562013 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:26:28.562097 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:26:28.562118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:26:28.650138 kernel: raid6: avx512x4 gen() 6435 MB/s
Sep 4 17:26:28.667332 kernel: raid6: avx512x2 gen() 7252 MB/s
Sep 4 17:26:28.687624 kernel: raid6: avx512x1 gen() 6713 MB/s
Sep 4 17:26:28.705607 kernel: raid6: avx2x4 gen() 5895 MB/s
Sep 4 17:26:28.724339 kernel: raid6: avx2x2 gen() 5730 MB/s
Sep 4 17:26:28.741332 kernel: raid6: avx2x1 gen() 6820 MB/s
Sep 4 17:26:28.741415 kernel: raid6: using algorithm avx512x2 gen() 7252 MB/s
Sep 4 17:26:28.759313 kernel: raid6: .... xor() 10464 MB/s, rmw enabled
Sep 4 17:26:28.759388 kernel: raid6: using avx512x2 recovery algorithm
Sep 4 17:26:28.792310 kernel: xor: automatically using best checksumming function avx
Sep 4 17:26:29.065308 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:26:29.077847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:26:29.084962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:26:29.112235 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep 4 17:26:29.118429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:26:29.134844 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:26:29.155790 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Sep 4 17:26:29.196995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:26:29.204648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:26:29.314692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:26:29.326501 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:26:29.368705 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:26:29.372916 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:26:29.374494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:26:29.375833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:26:29.388491 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:26:29.433110 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:26:29.445311 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:26:29.470620 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:26:29.470896 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:26:29.477302 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 4 17:26:29.482309 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:26:29.485163 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:26:29.483661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:26:29.483828 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:26:29.489844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:26:29.493431 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:52:6c:42:e3:c1
Sep 4 17:26:29.493863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:26:29.495232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:26:29.498146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:26:29.505819 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:26:29.511519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:26:29.608641 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:26:29.608943 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 4 17:26:29.619304 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:26:29.629305 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:26:29.629379 kernel: GPT:9289727 != 16777215
Sep 4 17:26:29.629398 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:26:29.629416 kernel: GPT:9289727 != 16777215
Sep 4 17:26:29.629432 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:26:29.629450 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:26:29.727209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:26:29.739309 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (461)
Sep 4 17:26:29.742690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:26:29.757331 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (451)
Sep 4 17:26:29.791844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:26:29.835203 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:26:29.862175 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:26:29.877373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:26:29.903656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:26:29.903985 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:26:29.926719 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:26:29.940460 disk-uuid[629]: Primary Header is updated.
Sep 4 17:26:29.940460 disk-uuid[629]: Secondary Entries is updated.
Sep 4 17:26:29.940460 disk-uuid[629]: Secondary Header is updated.
Sep 4 17:26:29.945401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:26:29.950391 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:26:29.957316 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:26:30.962349 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:26:30.966500 disk-uuid[630]: The operation has completed successfully.
Sep 4 17:26:31.178440 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:26:31.178564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:26:31.217508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:26:31.232653 sh[973]: Success
Sep 4 17:26:31.252165 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 4 17:26:31.371978 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:26:31.390398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:26:31.400665 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:26:31.421395 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:26:31.421644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:26:31.421679 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:26:31.422710 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:26:31.424000 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:26:31.532381 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:26:31.546450 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:26:31.547218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:26:31.555774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:26:31.561704 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:26:31.590960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:26:31.591092 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:26:31.591117 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:26:31.596310 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:26:31.616823 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:26:31.618177 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:26:31.642887 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:26:31.653612 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:26:31.711818 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:26:31.720592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:26:31.754870 systemd-networkd[1165]: lo: Link UP
Sep 4 17:26:31.754882 systemd-networkd[1165]: lo: Gained carrier
Sep 4 17:26:31.757442 systemd-networkd[1165]: Enumeration completed
Sep 4 17:26:31.758107 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:26:31.758112 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:26:31.758520 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:26:31.766255 systemd[1]: Reached target network.target - Network.
Sep 4 17:26:31.769128 systemd-networkd[1165]: eth0: Link UP
Sep 4 17:26:31.769136 systemd-networkd[1165]: eth0: Gained carrier
Sep 4 17:26:31.769153 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:26:31.786384 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.30.103/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:26:32.088830 ignition[1104]: Ignition 2.18.0
Sep 4 17:26:32.088845 ignition[1104]: Stage: fetch-offline
Sep 4 17:26:32.089115 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:32.089128 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:32.090793 ignition[1104]: Ignition finished successfully
Sep 4 17:26:32.095035 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:26:32.105532 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:26:32.126657 ignition[1175]: Ignition 2.18.0
Sep 4 17:26:32.126671 ignition[1175]: Stage: fetch
Sep 4 17:26:32.127113 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:32.127127 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:32.127305 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:32.137686 ignition[1175]: PUT result: OK
Sep 4 17:26:32.140245 ignition[1175]: parsed url from cmdline: ""
Sep 4 17:26:32.140252 ignition[1175]: no config URL provided
Sep 4 17:26:32.140265 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:26:32.140292 ignition[1175]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:26:32.140315 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:32.141616 ignition[1175]: PUT result: OK
Sep 4 17:26:32.141775 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:26:32.144575 ignition[1175]: GET result: OK
Sep 4 17:26:32.144674 ignition[1175]: parsing config with SHA512: 5f5d93d8a0856ad1bef11f3c00d61fad54740b3003a6fd5e0284488f7998addcb0d4756f98c876d11331d9e65d2804431e50f4a50344b5e19d2132a842c2aaea
Sep 4 17:26:32.160886 unknown[1175]: fetched base config from "system"
Sep 4 17:26:32.162079 ignition[1175]: fetch: fetch complete
Sep 4 17:26:32.160903 unknown[1175]: fetched base config from "system"
Sep 4 17:26:32.162087 ignition[1175]: fetch: fetch passed
Sep 4 17:26:32.160913 unknown[1175]: fetched user config from "aws"
Sep 4 17:26:32.162142 ignition[1175]: Ignition finished successfully
Sep 4 17:26:32.172607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:26:32.192720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:26:32.222245 ignition[1182]: Ignition 2.18.0
Sep 4 17:26:32.222259 ignition[1182]: Stage: kargs
Sep 4 17:26:32.222739 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:32.222752 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:32.222856 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:32.224079 ignition[1182]: PUT result: OK
Sep 4 17:26:32.230008 ignition[1182]: kargs: kargs passed
Sep 4 17:26:32.230088 ignition[1182]: Ignition finished successfully
Sep 4 17:26:32.233140 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:26:32.239832 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:26:32.258103 ignition[1189]: Ignition 2.18.0
Sep 4 17:26:32.258117 ignition[1189]: Stage: disks
Sep 4 17:26:32.258641 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:32.258656 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:32.258767 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:32.260260 ignition[1189]: PUT result: OK
Sep 4 17:26:32.265910 ignition[1189]: disks: disks passed
Sep 4 17:26:32.265985 ignition[1189]: Ignition finished successfully
Sep 4 17:26:32.268433 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:26:32.270590 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:26:32.273798 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:26:32.277436 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:26:32.281089 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:26:32.287450 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:26:32.293446 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:26:32.348454 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:26:32.356672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:26:32.372734 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:26:32.539299 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:26:32.540643 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:26:32.541524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:26:32.561524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:26:32.568488 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:26:32.569163 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:26:32.569223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:26:32.569256 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:26:32.586863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:26:32.596321 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1217)
Sep 4 17:26:32.600830 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:26:32.600907 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:26:32.600929 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:26:32.601257 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:26:32.607716 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:26:32.608245 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:26:33.058189 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:26:33.098971 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:26:33.106461 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:26:33.114118 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:26:33.397227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:26:33.404496 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:26:33.412797 systemd-networkd[1165]: eth0: Gained IPv6LL
Sep 4 17:26:33.415774 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:26:33.437730 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:26:33.441379 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:26:33.485886 ignition[1330]: INFO : Ignition 2.18.0
Sep 4 17:26:33.485886 ignition[1330]: INFO : Stage: mount
Sep 4 17:26:33.488447 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:33.488447 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:33.488447 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:33.494799 ignition[1330]: INFO : PUT result: OK
Sep 4 17:26:33.488684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:26:33.498841 ignition[1330]: INFO : mount: mount passed
Sep 4 17:26:33.498841 ignition[1330]: INFO : Ignition finished successfully
Sep 4 17:26:33.500105 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:26:33.507511 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:26:33.549606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:26:33.579302 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1342)
Sep 4 17:26:33.579365 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:26:33.581794 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:26:33.581959 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:26:33.586305 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:26:33.589006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:26:33.619780 ignition[1359]: INFO : Ignition 2.18.0
Sep 4 17:26:33.619780 ignition[1359]: INFO : Stage: files
Sep 4 17:26:33.622196 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:33.622196 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:33.625213 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:33.628317 ignition[1359]: INFO : PUT result: OK
Sep 4 17:26:33.631727 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:26:33.633417 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:26:33.633417 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:26:33.674732 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:26:33.676724 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:26:33.681120 unknown[1359]: wrote ssh authorized keys file for user: core
Sep 4 17:26:33.688737 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:26:33.698315 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:26:33.698315 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:26:33.770375 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:26:33.867395 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:26:33.869658 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:26:33.873589 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Sep 4 17:26:34.306035 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:26:34.823708 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Sep 4 17:26:34.823708 ignition[1359]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:26:34.828776 ignition[1359]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:26:34.828776 ignition[1359]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:26:34.828776 ignition[1359]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:26:34.834241 ignition[1359]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:26:34.834241 ignition[1359]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:26:34.834241 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:26:34.834241 ignition[1359]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:26:34.834241 ignition[1359]: INFO : files: files passed
Sep 4 17:26:34.834241 ignition[1359]: INFO : Ignition finished successfully
Sep 4 17:26:34.847838 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:26:34.860624 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:26:34.865226 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:26:34.869858 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:26:34.869960 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:26:34.887406 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:26:34.887406 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:26:34.891492 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:26:34.894193 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:26:34.894851 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:26:34.903729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:26:34.967106 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:26:34.967238 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:26:34.971475 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:26:34.974400 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:26:34.975806 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:26:34.983487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:26:35.015766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:26:35.023619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:26:35.048520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:26:35.048760 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:26:35.056193 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:26:35.057417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:26:35.057593 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:26:35.062634 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:26:35.064912 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:26:35.066940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:26:35.078676 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:26:35.082893 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:26:35.085580 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:26:35.089182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:26:35.105981 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:26:35.109949 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:26:35.113368 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:26:35.114456 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:26:35.114589 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:26:35.120636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:26:35.122224 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:26:35.128953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:26:35.132872 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:26:35.137712 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:26:35.137952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:26:35.143348 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:26:35.143873 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:26:35.149024 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:26:35.152719 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:26:35.160964 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:26:35.199620 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:26:35.207225 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:26:35.207482 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:26:35.223081 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:26:35.223265 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:26:35.249553 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:26:35.249681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:26:35.264982 ignition[1413]: INFO : Ignition 2.18.0
Sep 4 17:26:35.264982 ignition[1413]: INFO : Stage: umount
Sep 4 17:26:35.264982 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:26:35.264982 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:26:35.264982 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:26:35.271697 ignition[1413]: INFO : PUT result: OK
Sep 4 17:26:35.271697 ignition[1413]: INFO : umount: umount passed
Sep 4 17:26:35.271697 ignition[1413]: INFO : Ignition finished successfully
Sep 4 17:26:35.274821 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:26:35.274950 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:26:35.278357 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:26:35.278472 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:26:35.282675 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:26:35.282762 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:26:35.286366 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:26:35.286430 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:26:35.288765 systemd[1]: Stopped target network.target - Network.
Sep 4 17:26:35.293685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:26:35.293768 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:26:35.296048 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:26:35.301248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:26:35.308462 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:26:35.308605 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:26:35.313955 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:26:35.316104 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:26:35.316169 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:26:35.318254 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:26:35.318441 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:26:35.319606 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:26:35.319679 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:26:35.319918 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:26:35.319961 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:26:35.326441 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:26:35.328007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:26:35.337334 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:26:35.345138 systemd-networkd[1165]: eth0: DHCPv6 lease lost
Sep 4 17:26:35.346772 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:26:35.347083 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:26:35.352580 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:26:35.353377 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:26:35.357043 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:26:35.357112 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:26:35.363952 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:26:35.367713 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:26:35.367798 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:26:35.368152 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:26:35.368210 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:26:35.371352 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:26:35.371411 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:26:35.373821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:26:35.373877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:26:35.376975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:26:35.396321 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:26:35.396498 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:26:35.400084 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:26:35.400649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:26:35.404867 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:26:35.404961 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:26:35.406501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:26:35.406537 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:26:35.407734 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:26:35.407972 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:26:35.411353 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:26:35.411427 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:26:35.417943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:26:35.418041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:26:35.432550 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:26:35.435820 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:26:35.435917 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:26:35.437369 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:26:35.437439 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:26:35.439102 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:26:35.439169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:26:35.441466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:26:35.441529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:26:35.446738 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:26:35.446835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:26:35.452368 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:26:35.452467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:26:35.468816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:26:35.469041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:26:35.471333 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:26:35.482469 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:26:35.515244 systemd[1]: Switching root.
Sep 4 17:26:35.548964 systemd-journald[178]: Journal stopped
Sep 4 17:26:37.882229 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:26:37.894588 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:26:37.894631 kernel: SELinux: policy capability open_perms=1
Sep 4 17:26:37.894650 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:26:37.894667 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:26:37.894685 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:26:37.894714 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:26:37.894741 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:26:37.894758 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:26:37.894775 kernel: audit: type=1403 audit(1725470796.472:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:26:37.894796 systemd[1]: Successfully loaded SELinux policy in 71.030ms.
Sep 4 17:26:37.894822 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.064ms.
Sep 4 17:26:37.894843 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:26:37.894862 systemd[1]: Detected virtualization amazon.
Sep 4 17:26:37.894881 systemd[1]: Detected architecture x86-64.
Sep 4 17:26:37.894903 systemd[1]: Detected first boot.
Sep 4 17:26:37.894924 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:26:37.894943 zram_generator::config[1456]: No configuration found.
Sep 4 17:26:37.894967 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:26:37.894985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:26:37.895004 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:26:37.895022 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:26:37.895043 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:26:37.895065 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:26:37.895084 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:26:37.895102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:26:37.895122 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:26:37.895139 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:26:37.895158 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:26:37.895176 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:26:37.895194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:26:37.895214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:26:37.895236 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:26:37.895253 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:26:37.895272 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:26:37.895311 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:26:37.895330 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:26:37.895348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:26:37.895366 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:26:37.895384 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:26:37.895402 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:26:37.895425 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:26:37.895442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:26:37.895461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:26:37.895479 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:26:37.895497 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:26:37.895515 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:26:37.895533 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:26:37.895551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:26:37.895573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:26:37.895591 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:26:37.895609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:26:37.895628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:26:37.895645 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:26:37.895664 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:26:37.895682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:37.895700 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:26:37.895790 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:26:37.895823 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:26:37.895844 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:26:37.895862 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:26:37.895880 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:26:37.895898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:26:37.895917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:26:37.895936 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:26:37.895953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:26:37.895974 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:26:37.895993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:26:37.896012 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:26:37.896030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:26:37.896048 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:26:37.896067 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:26:37.896085 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:26:37.896103 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:26:37.896122 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:26:37.896143 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:26:37.896164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:26:37.896182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:26:37.896200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:26:37.896218 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:26:37.896236 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:26:37.896254 systemd[1]: Stopped verity-setup.service.
Sep 4 17:26:37.896272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:37.907546 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:26:37.907578 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:26:37.907597 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:26:37.907615 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:26:37.907633 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:26:37.907655 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:26:37.907675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:26:37.907693 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:26:37.907712 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:26:37.907730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:26:37.907789 systemd-journald[1530]: Collecting audit messages is disabled.
Sep 4 17:26:37.907823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:26:37.907842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:26:37.907864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:26:37.907883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:26:37.907901 kernel: loop: module loaded
Sep 4 17:26:37.907925 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:26:37.907943 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:26:37.907965 systemd-journald[1530]: Journal started
Sep 4 17:26:37.908000 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec210d0e3da1e74e050a4df80e532de6) is 4.8M, max 38.6M, 33.7M free.
Sep 4 17:26:37.403835 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:26:37.459494 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:26:37.460090 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:26:37.933220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:26:37.933415 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:26:37.934848 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:26:37.935169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:26:37.936954 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:26:37.941468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:26:37.943403 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:26:37.947301 kernel: fuse: init (API version 7.39)
Sep 4 17:26:37.959840 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:26:37.960686 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:26:38.020599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:26:38.023452 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:26:38.023506 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:26:38.028156 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:26:38.053488 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:26:38.061672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:26:38.061930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:26:38.079537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:26:38.089808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:26:38.091799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:26:38.097636 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:26:38.099681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:26:38.107552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:26:38.114330 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:26:38.134359 kernel: ACPI: bus type drm_connector registered
Sep 4 17:26:38.121958 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:26:38.123877 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:26:38.124271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:26:38.126133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:26:38.134648 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Sep 4 17:26:38.134671 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Sep 4 17:26:38.193198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:26:38.199963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:26:38.216512 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:26:38.225391 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec210d0e3da1e74e050a4df80e532de6 is 75.436ms for 964 entries.
Sep 4 17:26:38.225391 systemd-journald[1530]: System Journal (/var/log/journal/ec210d0e3da1e74e050a4df80e532de6) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:26:38.308766 systemd-journald[1530]: Received client request to flush runtime journal.
Sep 4 17:26:38.308826 kernel: loop0: detected capacity change from 0 to 60984
Sep 4 17:26:38.308854 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:26:38.236014 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:26:38.237753 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:26:38.246626 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:26:38.281072 udevadm[1585]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:26:38.285790 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:26:38.298693 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:26:38.300552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:26:38.316299 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:26:38.361304 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:26:38.366674 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:26:38.369715 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:26:38.397554 kernel: loop1: detected capacity change from 0 to 211296
Sep 4 17:26:38.402053 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:26:38.409608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:26:38.440392 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Sep 4 17:26:38.440829 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Sep 4 17:26:38.448987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:26:38.464311 kernel: loop2: detected capacity change from 0 to 139904
Sep 4 17:26:38.583490 kernel: loop3: detected capacity change from 0 to 80568
Sep 4 17:26:38.761317 kernel: loop4: detected capacity change from 0 to 60984
Sep 4 17:26:38.797378 kernel: loop5: detected capacity change from 0 to 211296
Sep 4 17:26:38.846959 kernel: loop6: detected capacity change from 0 to 139904
Sep 4 17:26:38.900753 kernel: loop7: detected capacity change from 0 to 80568
Sep 4 17:26:38.942692 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:26:38.944427 (sd-merge)[1609]: Merged extensions into '/usr'.
Sep 4 17:26:38.955192 systemd[1]: Reloading requested from client PID 1572 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:26:38.955213 systemd[1]: Reloading...
Sep 4 17:26:39.099411 zram_generator::config[1634]: No configuration found.
Sep 4 17:26:39.434074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:26:39.554982 systemd[1]: Reloading finished in 599 ms.
Sep 4 17:26:39.595486 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:26:39.613670 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:26:39.632694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:26:39.673023 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:26:39.673549 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:26:39.674441 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:26:39.674462 systemd[1]: Reloading...
Sep 4 17:26:39.677108 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:26:39.679651 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 4 17:26:39.679744 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 4 17:26:39.714699 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:26:39.714719 systemd-tmpfiles[1683]: Skipping /boot
Sep 4 17:26:39.733453 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:26:39.733474 systemd-tmpfiles[1683]: Skipping /boot
Sep 4 17:26:39.842810 zram_generator::config[1709]: No configuration found.
Sep 4 17:26:40.011162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:26:40.082174 systemd[1]: Reloading finished in 407 ms.
Sep 4 17:26:40.084676 ldconfig[1567]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:26:40.099706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:26:40.101738 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:26:40.108848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:26:40.121507 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:26:40.124879 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:26:40.134565 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:26:40.139635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:26:40.146591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:26:40.149873 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:26:40.164976 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.165271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:26:40.173899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:26:40.187709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:26:40.196730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:26:40.198652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:26:40.198853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.204041 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.204838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:26:40.212377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:26:40.221055 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:26:40.224336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.238765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.240083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:26:40.248837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:26:40.251793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:26:40.252380 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:26:40.254601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:26:40.258485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:26:40.260945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:26:40.262552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:26:40.270857 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:26:40.284943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:26:40.285230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:26:40.296570 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:26:40.299596 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:26:40.299801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:26:40.301752 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:26:40.301934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:26:40.304379 systemd-udevd[1770]: Using default interface naming scheme 'v255'.
Sep 4 17:26:40.312828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:26:40.312948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:26:40.320852 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:26:40.347409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:26:40.349555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:26:40.361809 augenrules[1797]: No rules
Sep 4 17:26:40.362216 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:26:40.365859 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:26:40.405319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:26:40.420637 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:26:40.450601 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:26:40.609442 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:26:40.654307 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1826)
Sep 4 17:26:40.661366 (udev-worker)[1811]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:26:40.717385 systemd-networkd[1810]: lo: Link UP
Sep 4 17:26:40.717750 systemd-networkd[1810]: lo: Gained carrier
Sep 4 17:26:40.721573 systemd-networkd[1810]: Enumeration completed
Sep 4 17:26:40.724431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:26:40.725106 systemd-networkd[1810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:26:40.725111 systemd-networkd[1810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:26:40.733193 systemd-networkd[1810]: eth0: Link UP
Sep 4 17:26:40.733692 systemd-networkd[1810]: eth0: Gained carrier
Sep 4 17:26:40.733809 systemd-networkd[1810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:26:40.734497 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:26:40.747494 systemd-networkd[1810]: eth0: DHCPv4 address 172.31.30.103/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:26:40.774074 systemd-resolved[1768]: Positive Trust Anchors:
Sep 4 17:26:40.774166 systemd-resolved[1768]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:26:40.774220 systemd-resolved[1768]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:26:40.783454 systemd-resolved[1768]: Defaulting to hostname 'linux'.
Sep 4 17:26:40.786879 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:26:40.788567 systemd[1]: Reached target network.target - Network.
Sep 4 17:26:40.789570 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:26:40.804317 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Sep 4 17:26:40.820338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 4 17:26:40.827340 systemd-networkd[1810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:26:40.836364 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 4 17:26:40.839078 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:26:40.842370 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 4 17:26:40.851332 kernel: ACPI: button: Sleep Button [SLPF]
Sep 4 17:26:40.907350 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1821)
Sep 4 17:26:40.913305 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:26:40.945344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:26:41.083262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:26:41.092938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:26:41.094869 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:26:41.103968 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:26:41.128817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:26:41.143306 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:26:41.177659 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:26:41.287749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:26:41.296597 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:26:41.298331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:26:41.301259 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:26:41.302603 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:26:41.303993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:26:41.305618 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:26:41.307235 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:26:41.308877 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:26:41.310302 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:26:41.310340 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:26:41.311572 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:26:41.314541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:26:41.315915 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:26:41.318241 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:26:41.325092 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:26:41.329394 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:26:41.330753 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:26:41.331777 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:26:41.332860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:26:41.332892 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:26:41.338398 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:26:41.343470 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 4 17:26:41.356599 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:26:41.361742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:26:41.365506 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:26:41.368581 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:26:41.375646 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:26:41.381494 systemd[1]: Started ntpd.service - Network Time Service.
Sep 4 17:26:41.389369 jq[1939]: false
Sep 4 17:26:41.393747 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:26:41.398654 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 4 17:26:41.407887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:26:41.414507 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:26:41.453952 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:26:41.461357 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:26:41.462046 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:26:41.471516 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:26:41.476425 extend-filesystems[1940]: Found loop4
Sep 4 17:26:41.476425 extend-filesystems[1940]: Found loop5
Sep 4 17:26:41.474357 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:26:41.478630 extend-filesystems[1940]: Found loop6
Sep 4 17:26:41.478630 extend-filesystems[1940]: Found loop7
Sep 4 17:26:41.478630 extend-filesystems[1940]: Found nvme0n1
Sep 4 17:26:41.483728 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:26:41.489791 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:26:41.490423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:26:41.496799 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:26:41.497435 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p1
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p2
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p3
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found usr
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p4
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p6
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p7
Sep 4 17:26:41.506357 extend-filesystems[1940]: Found nvme0n1p9
Sep 4 17:26:41.506357 extend-filesystems[1940]: Checking size of /dev/nvme0n1p9
Sep 4 17:26:41.507391 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:26:41.507612 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:26:41.522849 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:26:41.576561 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 4 17:26:41.586560 jq[1955]: true
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:12:45 UTC 2024 (1): Starting
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: ----------------------------------------------------
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: corporation. Support and training for ntp-4 are
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: available at https://www.nwtime.org/support
Sep 4 17:26:41.597465 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: ----------------------------------------------------
Sep 4 17:26:41.594023 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:12:45 UTC 2024 (1): Starting
Sep 4 17:26:41.594054 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 4 17:26:41.594066 ntpd[1942]: ----------------------------------------------------
Sep 4 17:26:41.594078 ntpd[1942]: ntp-4 is maintained by Network Time Foundation,
Sep 4 17:26:41.594089 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 4 17:26:41.594099 ntpd[1942]: corporation. Support and training for ntp-4 are
Sep 4 17:26:41.594110 ntpd[1942]: available at https://www.nwtime.org/support
Sep 4 17:26:41.595108 ntpd[1942]: ----------------------------------------------------
Sep 4 17:26:41.610741 ntpd[1942]: proto: precision = 0.091 usec (-23)
Sep 4 17:26:41.619097 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:26:41.625526 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: proto: precision = 0.091 usec (-23)
Sep 4 17:26:41.625526 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: basedate set to 2024-08-23
Sep 4 17:26:41.625526 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:26:41.616220 ntpd[1942]: basedate set to 2024-08-23
Sep 4 17:26:41.616242 ntpd[1942]: gps base set to 2024-08-25 (week 2329)
Sep 4 17:26:41.618783 dbus-daemon[1938]: [system] SELinux support is enabled
Sep 4 17:26:41.636940 extend-filesystems[1940]: Resized partition /dev/nvme0n1p9
Sep 4 17:26:41.633300 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listen normally on 3 eth0 172.31.30.103:123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listen normally on 4 lo [::1]:123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: bind(21) AF_INET6 fe80::452:6cff:fe42:e3c1%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: unable to create socket on eth0 (5) for fe80::452:6cff:fe42:e3c1%2#123
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: failed to init interface for address fe80::452:6cff:fe42:e3c1%2
Sep 4 17:26:41.642353 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Sep 4 17:26:41.638466 dbus-daemon[1938]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1810 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 4 17:26:41.633434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:26:41.639676 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123
Sep 4 17:26:41.635231 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:26:41.639740 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 4 17:26:41.635257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:26:41.641635 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123
Sep 4 17:26:41.641688 ntpd[1942]: Listen normally on 3 eth0 172.31.30.103:123
Sep 4 17:26:41.641732 ntpd[1942]: Listen normally on 4 lo [::1]:123
Sep 4 17:26:41.641792 ntpd[1942]: bind(21) AF_INET6 fe80::452:6cff:fe42:e3c1%2#123 flags 0x11 failed: Cannot assign requested address
Sep 4 17:26:41.641814 ntpd[1942]: unable to create socket on eth0 (5) for fe80::452:6cff:fe42:e3c1%2#123
Sep 4 17:26:41.641831 ntpd[1942]: failed to init interface for address fe80::452:6cff:fe42:e3c1%2
Sep 4 17:26:41.641865 ntpd[1942]: Listening on routing socket on fd #21 for interface updates
Sep 4 17:26:41.684237 extend-filesystems[1989]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:26:41.699473 tar[1960]: linux-amd64/helm
Sep 4 17:26:41.656083 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 17:26:41.655713 (ntainerd)[1981]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:26:41.691517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 4 17:26:41.715446 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:26:41.715446 ntpd[1942]: 4 Sep 17:26:41 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:26:41.715839 update_engine[1951]: I0904 17:26:41.714066  1951 main.cc:92] Flatcar Update Engine starting
Sep 4 17:26:41.724513 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 4 17:26:41.700843 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:26:41.724641 jq[1980]: true
Sep 4 17:26:41.700887 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 4 17:26:41.742132 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:26:41.747995 update_engine[1951]: I0904 17:26:41.747553  1951 update_check_scheduler.cc:74] Next update check in 5m29s
Sep 4 17:26:41.752720 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:26:41.853674 systemd-networkd[1810]: eth0: Gained IPv6LL
Sep 4 17:26:41.875045 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:26:41.878733 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:26:41.893950 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:26:41.907512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:26:41.935314 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 4 17:26:41.916670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:26:41.939164 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 4 17:26:41.939370 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 4 17:26:41.940234 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1992 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 4 17:26:41.954180 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 4 17:26:41.995593 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 4 17:26:41.995593 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:26:41.995593 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 4 17:26:42.000605 extend-filesystems[1940]: Resized filesystem in /dev/nvme0n1p9
Sep 4 17:26:42.000239 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:26:42.001379 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:26:42.011299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1812)
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch failed with 404: resource not found
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 4 17:26:42.043178 coreos-metadata[1937]: Sep 04 17:26:42.039 INFO Fetch successful
Sep 4 17:26:42.128175 polkitd[2017]: Started polkitd version 121
Sep 4 17:26:42.137885 bash[2029]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:26:42.141907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:26:42.156801 systemd[1]: Starting sshkeys.service...
Sep 4 17:26:42.170071 systemd-logind[1947]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:26:42.170108 systemd-logind[1947]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 4 17:26:42.170131 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:26:42.189258 systemd-logind[1947]: New seat seat0.
Sep 4 17:26:42.189779 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:26:42.197714 polkitd[2017]: Loading rules from directory /etc/polkit-1/rules.d
Sep 4 17:26:42.197792 polkitd[2017]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 4 17:26:42.198974 polkitd[2017]: Finished loading, compiling and executing 2 rules
Sep 4 17:26:42.202489 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 4 17:26:42.202695 systemd[1]: Started polkit.service - Authorization Manager.
Sep 4 17:26:42.203614 polkitd[2017]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 4 17:26:42.217061 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:26:42.266104 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 4 17:26:42.275876 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 4 17:26:42.305224 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 4 17:26:42.325271 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:26:42.357937 systemd-resolved[1768]: System hostname changed to 'ip-172-31-30-103'.
Sep 4 17:26:42.357937 systemd-hostnamed[1992]: Hostname set to (transient)
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: Initializing new seelog logger
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: New Seelog Logger Creation Complete
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 processing appconfig overrides
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 processing appconfig overrides
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 processing appconfig overrides
Sep 4 17:26:42.367304 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO Proxy environment variables:
Sep 4 17:26:42.395705 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.395705 amazon-ssm-agent[2009]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:26:42.395705 amazon-ssm-agent[2009]: 2024/09/04 17:26:42 processing appconfig overrides
Sep 4 17:26:42.499375 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO https_proxy:
Sep 4 17:26:42.519045 coreos-metadata[2103]: Sep 04 17:26:42.518 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:26:42.519765 coreos-metadata[2103]: Sep 04 17:26:42.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 17:26:42.523797 coreos-metadata[2103]: Sep 04 17:26:42.523 INFO Fetch successful
Sep 4 17:26:42.523906 coreos-metadata[2103]: Sep 04 17:26:42.523 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:26:42.528303 coreos-metadata[2103]: Sep 04 17:26:42.526 INFO Fetch successful
Sep 4 17:26:42.531756 unknown[2103]: wrote ssh authorized keys file for user: core
Sep 4 17:26:42.613892 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO http_proxy:
Sep 4 17:26:42.621575 locksmithd[2000]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:26:42.639310 update-ssh-keys[2153]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:26:42.642906 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:26:42.652519 systemd[1]: Finished sshkeys.service.
Sep 4 17:26:42.734478 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO no_proxy:
Sep 4 17:26:42.833069 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:26:42.944315 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:26:43.043309 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO Agent will take identity from EC2
Sep 4 17:26:43.098355 containerd[1981]: time="2024-09-04T17:26:43.097686704Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:26:43.146527 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:26:43.244812 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:26:43.247497 containerd[1981]: time="2024-09-04T17:26:43.247449999Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:26:43.247666 containerd[1981]: time="2024-09-04T17:26:43.247646868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260389662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260443721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260728438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260751991Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260850958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260914679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.260987074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261101 containerd[1981]: time="2024-09-04T17:26:43.261081564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261483 containerd[1981]: time="2024-09-04T17:26:43.261338347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261483 containerd[1981]: time="2024-09-04T17:26:43.261365070Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:26:43.261483 containerd[1981]: time="2024-09-04T17:26:43.261381171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261595 containerd[1981]: time="2024-09-04T17:26:43.261533119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:26:43.261595 containerd[1981]: time="2024-09-04T17:26:43.261554479Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:26:43.261670 containerd[1981]: time="2024-09-04T17:26:43.261619905Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:26:43.261670 containerd[1981]: time="2024-09-04T17:26:43.261635382Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304014031Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304074952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304098564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304144934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304165235Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304181000Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304200713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304413833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304435867Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304455015Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304484039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304505901Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304530350Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:26:43.306337 containerd[1981]: time="2024-09-04T17:26:43.304551784Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304570814Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304592243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304613999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..."
type=io.containerd.service.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304633017Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304652533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.304770286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.305145669Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.305182325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.305202577Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:26:43.306981 containerd[1981]: time="2024-09-04T17:26:43.305234731Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:26:43.307351 containerd[1981]: time="2024-09-04T17:26:43.307205980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.307351 containerd[1981]: time="2024-09-04T17:26:43.307245606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.307351 containerd[1981]: time="2024-09-04T17:26:43.307267263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309783479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309828595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309851688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309882015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309905633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.309963486Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.310153141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.312298 containerd[1981]: time="2024-09-04T17:26:43.310207911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.315621 containerd[1981]: time="2024-09-04T17:26:43.315573178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.315726 containerd[1981]: time="2024-09-04T17:26:43.315638759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.315726 containerd[1981]: time="2024-09-04T17:26:43.315659173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.315726 containerd[1981]: time="2024-09-04T17:26:43.315680488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 4 17:26:43.315726 containerd[1981]: time="2024-09-04T17:26:43.315700904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.315726 containerd[1981]: time="2024-09-04T17:26:43.315720970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:26:43.316249 containerd[1981]: time="2024-09-04T17:26:43.316159612Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:26:43.316511 containerd[1981]: time="2024-09-04T17:26:43.316255632Z" level=info msg="Connect containerd service" Sep 4 17:26:43.316511 containerd[1981]: time="2024-09-04T17:26:43.316319036Z" level=info msg="using legacy CRI server" Sep 4 17:26:43.316511 containerd[1981]: time="2024-09-04T17:26:43.316329432Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:26:43.316511 containerd[1981]: time="2024-09-04T17:26:43.316506048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:26:43.325678 containerd[1981]: time="2024-09-04T17:26:43.325628197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325725498Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325755214Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325772223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325790331Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325792036Z" level=info msg="Start subscribing containerd event" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325860443Z" level=info msg="Start recovering state" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.325992645Z" level=info msg="Start event monitor" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.326016193Z" level=info msg="Start snapshots syncer" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.326029367Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.326040582Z" level=info msg="Start streaming server" Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.326197448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:26:43.328371 containerd[1981]: time="2024-09-04T17:26:43.326252210Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:26:43.335506 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 17:26:43.339469 containerd[1981]: time="2024-09-04T17:26:43.337239490Z" level=info msg="containerd successfully booted in 0.240709s"
Sep 4 17:26:43.345794 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:26:43.427208 sshd_keygen[1984]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:26:43.446306 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:26:43.477057 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 4 17:26:43.477057 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:26:43.477057 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [Registrar] Starting registrar module
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:43 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:43 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:43 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:26:43.477256 amazon-ssm-agent[2009]: 2024-09-04 17:26:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:26:43.481568 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:26:43.491702 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:26:43.500694 systemd[1]: Started sshd@0-172.31.30.103:22-139.178.68.195:60440.service - OpenSSH per-connection server daemon (139.178.68.195:60440).
Sep 4 17:26:43.521015 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:26:43.522618 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:26:43.534939 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:26:43.544705 amazon-ssm-agent[2009]: 2024-09-04 17:26:43 INFO [CredentialRefresher] Next credential rotation will be in 31.824992706716667 minutes
Sep 4 17:26:43.553748 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:26:43.574919 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:26:43.587076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:26:43.589232 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:26:43.801619 sshd[2173]: Accepted publickey for core from 139.178.68.195 port 60440 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:43.805760 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:43.828862 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:26:43.838335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:26:43.839535 tar[1960]: linux-amd64/LICENSE
Sep 4 17:26:43.842575 tar[1960]: linux-amd64/README.md
Sep 4 17:26:43.845959 systemd-logind[1947]: New session 1 of user core.
Sep 4 17:26:43.883116 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:26:43.891089 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:26:43.904299 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:26:43.918808 (systemd)[2187]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:44.105794 systemd[2187]: Queued start job for default target default.target.
Sep 4 17:26:44.111406 systemd[2187]: Created slice app.slice - User Application Slice.
Sep 4 17:26:44.111515 systemd[2187]: Reached target paths.target - Paths.
Sep 4 17:26:44.111541 systemd[2187]: Reached target timers.target - Timers.
Sep 4 17:26:44.113903 systemd[2187]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:26:44.141322 systemd[2187]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:26:44.141478 systemd[2187]: Reached target sockets.target - Sockets.
Sep 4 17:26:44.141499 systemd[2187]: Reached target basic.target - Basic System.
Sep 4 17:26:44.141551 systemd[2187]: Reached target default.target - Main User Target.
Sep 4 17:26:44.141656 systemd[2187]: Startup finished in 212ms.
Sep 4 17:26:44.142527 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:26:44.150128 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:26:44.316432 systemd[1]: Started sshd@1-172.31.30.103:22-139.178.68.195:60450.service - OpenSSH per-connection server daemon (139.178.68.195:60450).
Sep 4 17:26:44.325596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:26:44.329321 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:26:44.335874 systemd[1]: Startup finished in 689ms (kernel) + 8.725s (initrd) + 7.932s (userspace) = 17.347s.
Sep 4 17:26:44.501209 amazon-ssm-agent[2009]: 2024-09-04 17:26:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:26:44.511983 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:26:44.596875 ntpd[1942]: Listen normally on 6 eth0 [fe80::452:6cff:fe42:e3c1%2]:123
Sep 4 17:26:44.598196 ntpd[1942]: 4 Sep 17:26:44 ntpd[1942]: Listen normally on 6 eth0 [fe80::452:6cff:fe42:e3c1%2]:123
Sep 4 17:26:44.603471 amazon-ssm-agent[2009]: 2024-09-04 17:26:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2206) started
Sep 4 17:26:44.704934 amazon-ssm-agent[2009]: 2024-09-04 17:26:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:26:44.737241 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 60450 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:44.747125 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:44.758265 systemd-logind[1947]: New session 2 of user core.
Sep 4 17:26:44.761529 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:26:44.891625 sshd[2202]: pam_unix(sshd:session): session closed for user core
Sep 4 17:26:44.896128 systemd[1]: sshd@1-172.31.30.103:22-139.178.68.195:60450.service: Deactivated successfully.
Sep 4 17:26:44.896293 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:26:44.901006 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:26:44.903870 systemd-logind[1947]: Removed session 2.
Sep 4 17:26:44.926668 systemd[1]: Started sshd@2-172.31.30.103:22-139.178.68.195:60454.service - OpenSSH per-connection server daemon (139.178.68.195:60454).
Sep 4 17:26:45.088015 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 60454 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:45.090337 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:45.097935 systemd-logind[1947]: New session 3 of user core.
Sep 4 17:26:45.101488 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:26:45.221257 sshd[2231]: pam_unix(sshd:session): session closed for user core
Sep 4 17:26:45.227581 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:26:45.229615 systemd[1]: sshd@2-172.31.30.103:22-139.178.68.195:60454.service: Deactivated successfully.
Sep 4 17:26:45.232647 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:26:45.234390 systemd-logind[1947]: Removed session 3.
Sep 4 17:26:45.260909 systemd[1]: Started sshd@3-172.31.30.103:22-139.178.68.195:60468.service - OpenSSH per-connection server daemon (139.178.68.195:60468).
Sep 4 17:26:45.432985 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 60468 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:45.434591 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:45.437955 kubelet[2203]: E0904 17:26:45.437887 2203 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:26:45.442486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:26:45.442529 systemd-logind[1947]: New session 4 of user core.
Sep 4 17:26:45.443834 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:26:45.444150 systemd[1]: kubelet.service: Consumed 1.093s CPU time.
Sep 4 17:26:45.453548 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:26:45.585242 sshd[2240]: pam_unix(sshd:session): session closed for user core
Sep 4 17:26:45.590176 systemd[1]: sshd@3-172.31.30.103:22-139.178.68.195:60468.service: Deactivated successfully.
Sep 4 17:26:45.593785 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:26:45.594571 systemd-logind[1947]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:26:45.595801 systemd-logind[1947]: Removed session 4.
Sep 4 17:26:45.625680 systemd[1]: Started sshd@4-172.31.30.103:22-139.178.68.195:60482.service - OpenSSH per-connection server daemon (139.178.68.195:60482).
Sep 4 17:26:45.785097 sshd[2248]: Accepted publickey for core from 139.178.68.195 port 60482 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:45.786871 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:45.793712 systemd-logind[1947]: New session 5 of user core.
Sep 4 17:26:45.801539 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:26:45.911467 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:26:45.911825 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:26:45.928147 sudo[2251]: pam_unix(sudo:session): session closed for user root
Sep 4 17:26:45.951882 sshd[2248]: pam_unix(sshd:session): session closed for user core
Sep 4 17:26:45.956550 systemd[1]: sshd@4-172.31.30.103:22-139.178.68.195:60482.service: Deactivated successfully.
Sep 4 17:26:45.959741 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:26:45.962324 systemd-logind[1947]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:26:45.963449 systemd-logind[1947]: Removed session 5.
Sep 4 17:26:45.988762 systemd[1]: Started sshd@5-172.31.30.103:22-139.178.68.195:48414.service - OpenSSH per-connection server daemon (139.178.68.195:48414).
Sep 4 17:26:46.151774 sshd[2256]: Accepted publickey for core from 139.178.68.195 port 48414 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:46.153616 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:46.158369 systemd-logind[1947]: New session 6 of user core.
Sep 4 17:26:46.169517 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:26:46.266567 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:26:46.267428 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:26:46.272206 sudo[2260]: pam_unix(sudo:session): session closed for user root
Sep 4 17:26:46.278366 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:26:46.278777 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:26:46.310690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:26:46.333533 auditctl[2263]: No rules
Sep 4 17:26:46.335127 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:26:46.335493 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:26:46.342910 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:26:46.382662 augenrules[2281]: No rules
Sep 4 17:26:46.384102 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:26:46.386476 sudo[2259]: pam_unix(sudo:session): session closed for user root
Sep 4 17:26:46.409264 sshd[2256]: pam_unix(sshd:session): session closed for user core
Sep 4 17:26:46.413854 systemd[1]: sshd@5-172.31.30.103:22-139.178.68.195:48414.service: Deactivated successfully.
Sep 4 17:26:46.416058 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:26:46.418721 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:26:46.420406 systemd-logind[1947]: Removed session 6.
Sep 4 17:26:46.451680 systemd[1]: Started sshd@6-172.31.30.103:22-139.178.68.195:48422.service - OpenSSH per-connection server daemon (139.178.68.195:48422).
Sep 4 17:26:46.611665 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 48422 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:26:46.613551 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:26:46.623002 systemd-logind[1947]: New session 7 of user core.
Sep 4 17:26:46.634855 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:26:46.737052 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:26:46.737439 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:26:47.028397 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:26:47.047008 (dockerd)[2302]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:26:47.873527 dockerd[2302]: time="2024-09-04T17:26:47.873464745Z" level=info msg="Starting up"
Sep 4 17:26:48.123966 dockerd[2302]: time="2024-09-04T17:26:48.123830888Z" level=info msg="Loading containers: start."
Sep 4 17:26:48.357510 kernel: Initializing XFRM netlink socket
Sep 4 17:26:48.414831 (udev-worker)[2317]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:26:48.508167 systemd-networkd[1810]: docker0: Link UP
Sep 4 17:26:48.531071 dockerd[2302]: time="2024-09-04T17:26:48.531029158Z" level=info msg="Loading containers: done."
Sep 4 17:26:49.019803 systemd-resolved[1768]: Clock change detected. Flushing caches.
Sep 4 17:26:49.138034 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1865265617-merged.mount: Deactivated successfully.
Sep 4 17:26:49.147320 dockerd[2302]: time="2024-09-04T17:26:49.147209209Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:26:49.147617 dockerd[2302]: time="2024-09-04T17:26:49.147589923Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:26:49.147746 dockerd[2302]: time="2024-09-04T17:26:49.147724201Z" level=info msg="Daemon has completed initialization"
Sep 4 17:26:49.197391 dockerd[2302]: time="2024-09-04T17:26:49.197270110Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:26:49.197699 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:26:50.403421 containerd[1981]: time="2024-09-04T17:26:50.403375830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep 4 17:26:51.142800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542895993.mount: Deactivated successfully.
Sep 4 17:26:54.935156 containerd[1981]: time="2024-09-04T17:26:54.935097425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:54.936754 containerd[1981]: time="2024-09-04T17:26:54.936701065Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949"
Sep 4 17:26:54.938975 containerd[1981]: time="2024-09-04T17:26:54.938554277Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:54.943262 containerd[1981]: time="2024-09-04T17:26:54.943209719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:54.944597 containerd[1981]: time="2024-09-04T17:26:54.944561496Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 4.541136918s"
Sep 4 17:26:54.944732 containerd[1981]: time="2024-09-04T17:26:54.944712737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\""
Sep 4 17:26:54.974111 containerd[1981]: time="2024-09-04T17:26:54.974070625Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep 4 17:26:56.116927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:26:56.122835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:26:56.618566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:26:56.624066 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:26:56.694691 kubelet[2504]: E0904 17:26:56.694650 2504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:26:56.700091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:26:56.700457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:26:59.492888 containerd[1981]: time="2024-09-04T17:26:59.492833125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:59.494799 containerd[1981]: time="2024-09-04T17:26:59.494746228Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206"
Sep 4 17:26:59.499124 containerd[1981]: time="2024-09-04T17:26:59.496958712Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:59.504910 containerd[1981]: time="2024-09-04T17:26:59.504834630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:26:59.506258 containerd[1981]: time="2024-09-04T17:26:59.506204300Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 4.532085027s"
Sep 4 17:26:59.506682 containerd[1981]: time="2024-09-04T17:26:59.506490272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\""
Sep 4 17:26:59.538177 containerd[1981]: time="2024-09-04T17:26:59.538138620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep 4 17:27:02.110652 containerd[1981]: time="2024-09-04T17:27:02.110594830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:02.114392 containerd[1981]: time="2024-09-04T17:27:02.114254966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507"
Sep 4 17:27:02.118264 containerd[1981]: time="2024-09-04T17:27:02.117876010Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:02.126490 containerd[1981]: time="2024-09-04T17:27:02.126437669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:02.133444 containerd[1981]: time="2024-09-04T17:27:02.133394697Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 2.595212635s"
Sep 4 17:27:02.133582 containerd[1981]: time="2024-09-04T17:27:02.133450976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\""
Sep 4 17:27:02.174400 containerd[1981]: time="2024-09-04T17:27:02.174363808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep 4 17:27:03.713371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187377713.mount: Deactivated successfully.
Sep 4 17:27:04.444629 containerd[1981]: time="2024-09-04T17:27:04.444579851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:04.446014 containerd[1981]: time="2024-09-04T17:27:04.445818328Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380"
Sep 4 17:27:04.447741 containerd[1981]: time="2024-09-04T17:27:04.447477215Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:04.450665 containerd[1981]: time="2024-09-04T17:27:04.450624151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:04.451516 containerd[1981]: time="2024-09-04T17:27:04.451472145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest
\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 2.276923605s" Sep 4 17:27:04.451742 containerd[1981]: time="2024-09-04T17:27:04.451631403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:27:04.477079 containerd[1981]: time="2024-09-04T17:27:04.477039427Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:27:05.116578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176917675.mount: Deactivated successfully. Sep 4 17:27:06.780906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:27:06.785327 containerd[1981]: time="2024-09-04T17:27:06.785274107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:06.799713 containerd[1981]: time="2024-09-04T17:27:06.790507853Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:27:06.799418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 17:27:06.802579 containerd[1981]: time="2024-09-04T17:27:06.802304973Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:06.819404 containerd[1981]: time="2024-09-04T17:27:06.819330641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:06.824112 containerd[1981]: time="2024-09-04T17:27:06.823281038Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.346031366s" Sep 4 17:27:06.824112 containerd[1981]: time="2024-09-04T17:27:06.823432196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:27:06.857084 containerd[1981]: time="2024-09-04T17:27:06.856897835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:27:07.423551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:27:07.446699 (kubelet)[2598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:27:07.537716 kubelet[2598]: E0904 17:27:07.537570 2598 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:27:07.541976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:27:07.542190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:27:07.687425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433694002.mount: Deactivated successfully. Sep 4 17:27:07.717221 containerd[1981]: time="2024-09-04T17:27:07.717130077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:07.719791 containerd[1981]: time="2024-09-04T17:27:07.719564549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:27:07.721404 containerd[1981]: time="2024-09-04T17:27:07.721329472Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:07.731378 containerd[1981]: time="2024-09-04T17:27:07.728990024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:07.731378 containerd[1981]: time="2024-09-04T17:27:07.731170971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 873.567367ms" Sep 4 17:27:07.731378 containerd[1981]: time="2024-09-04T17:27:07.731220845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:27:07.766583 containerd[1981]: time="2024-09-04T17:27:07.766546983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:27:08.378744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740727598.mount: Deactivated successfully. Sep 4 17:27:11.488900 containerd[1981]: time="2024-09-04T17:27:11.488611998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:11.498735 containerd[1981]: time="2024-09-04T17:27:11.498643943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:27:11.503639 containerd[1981]: time="2024-09-04T17:27:11.503553787Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:11.517821 containerd[1981]: time="2024-09-04T17:27:11.517726309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:11.519486 containerd[1981]: time="2024-09-04T17:27:11.519296115Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest 
\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.752710522s" Sep 4 17:27:11.519486 containerd[1981]: time="2024-09-04T17:27:11.519348934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:27:12.816395 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:27:15.253504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:15.268046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:15.302315 systemd[1]: Reloading requested from client PID 2731 ('systemctl') (unit session-7.scope)... Sep 4 17:27:15.302334 systemd[1]: Reloading... Sep 4 17:27:15.445279 zram_generator::config[2772]: No configuration found. Sep 4 17:27:15.594941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:27:15.707799 systemd[1]: Reloading finished in 405 ms. Sep 4 17:27:15.767092 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:27:15.767276 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:27:15.767574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:15.776039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:27:16.182297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:27:16.194668 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:27:16.271798 kubelet[2826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:27:16.274338 kubelet[2826]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:27:16.274338 kubelet[2826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:27:16.274338 kubelet[2826]: I0904 17:27:16.272892 2826 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:27:16.598846 kubelet[2826]: I0904 17:27:16.598802 2826 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:27:16.598846 kubelet[2826]: I0904 17:27:16.598842 2826 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:27:16.599210 kubelet[2826]: I0904 17:27:16.599188 2826 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:27:16.643676 kubelet[2826]: I0904 17:27:16.643632 2826 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:27:16.645619 kubelet[2826]: E0904 17:27:16.645510 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.655831 kubelet[2826]: I0904 17:27:16.655800 2826 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:27:16.656141 kubelet[2826]: I0904 17:27:16.656117 2826 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:27:16.657868 kubelet[2826]: I0904 17:27:16.657836 2826 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:27:16.658017 kubelet[2826]: I0904 17:27:16.657875 2826 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:27:16.658017 kubelet[2826]: I0904 17:27:16.657893 2826 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:27:16.658017 kubelet[2826]: I0904 
17:27:16.658016 2826 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:27:16.658476 kubelet[2826]: I0904 17:27:16.658465 2826 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:27:16.658518 kubelet[2826]: I0904 17:27:16.658488 2826 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:27:16.658563 kubelet[2826]: I0904 17:27:16.658522 2826 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:27:16.658563 kubelet[2826]: I0904 17:27:16.658543 2826 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:27:16.661561 kubelet[2826]: W0904 17:27:16.661021 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.661561 kubelet[2826]: E0904 17:27:16.661081 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.661561 kubelet[2826]: W0904 17:27:16.661151 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-103&limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.661561 kubelet[2826]: E0904 17:27:16.661189 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-103&limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.661984 kubelet[2826]: I0904 17:27:16.661967 2826 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:27:16.668878 kubelet[2826]: I0904 17:27:16.668834 2826 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:27:16.670224 kubelet[2826]: W0904 17:27:16.670192 2826 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:27:16.671541 kubelet[2826]: I0904 17:27:16.670905 2826 server.go:1256] "Started kubelet" Sep 4 17:27:16.673606 kubelet[2826]: I0904 17:27:16.673001 2826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:27:16.682590 kubelet[2826]: I0904 17:27:16.682512 2826 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:27:16.683954 kubelet[2826]: E0904 17:27:16.683885 2826 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.103:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-103.17f21a9582105247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-103,UID:ip-172-31-30-103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-103,},FirstTimestamp:2024-09-04 17:27:16.670878279 +0000 UTC m=+0.470549411,LastTimestamp:2024-09-04 17:27:16.670878279 +0000 UTC m=+0.470549411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-103,}" Sep 4 17:27:16.684537 kubelet[2826]: I0904 17:27:16.684442 2826 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:27:16.686153 kubelet[2826]: I0904 17:27:16.686136 2826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Sep 4 17:27:16.686494 kubelet[2826]: I0904 17:27:16.686482 2826 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:27:16.689254 kubelet[2826]: I0904 17:27:16.689054 2826 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:27:16.692141 kubelet[2826]: E0904 17:27:16.692113 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-103?timeout=10s\": dial tcp 172.31.30.103:6443: connect: connection refused" interval="200ms" Sep 4 17:27:16.692572 kubelet[2826]: I0904 17:27:16.692158 2826 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:27:16.694252 kubelet[2826]: W0904 17:27:16.692836 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.694252 kubelet[2826]: E0904 17:27:16.692892 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.694252 kubelet[2826]: I0904 17:27:16.692956 2826 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:27:16.694252 kubelet[2826]: I0904 17:27:16.693867 2826 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:27:16.694252 kubelet[2826]: I0904 17:27:16.693965 2826 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:27:16.701638 
kubelet[2826]: E0904 17:27:16.701613 2826 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:27:16.712363 kubelet[2826]: I0904 17:27:16.712332 2826 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:27:16.729898 kubelet[2826]: I0904 17:27:16.729853 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:27:16.732010 kubelet[2826]: I0904 17:27:16.731954 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:27:16.732010 kubelet[2826]: I0904 17:27:16.731999 2826 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:27:16.732223 kubelet[2826]: I0904 17:27:16.732024 2826 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:27:16.732223 kubelet[2826]: E0904 17:27:16.732123 2826 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:27:16.743510 kubelet[2826]: W0904 17:27:16.743099 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.743510 kubelet[2826]: E0904 17:27:16.743332 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused Sep 4 17:27:16.746142 kubelet[2826]: I0904 17:27:16.746015 2826 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:27:16.746142 kubelet[2826]: I0904 17:27:16.746046 2826 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Sep 4 17:27:16.746142 kubelet[2826]: I0904 17:27:16.746067 2826 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:27:16.750035 kubelet[2826]: I0904 17:27:16.749999 2826 policy_none.go:49] "None policy: Start" Sep 4 17:27:16.750990 kubelet[2826]: I0904 17:27:16.750954 2826 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:27:16.750990 kubelet[2826]: I0904 17:27:16.750986 2826 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:27:16.762470 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:27:16.774148 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:27:16.789997 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:27:16.793837 kubelet[2826]: I0904 17:27:16.791816 2826 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:27:16.793837 kubelet[2826]: I0904 17:27:16.792302 2826 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:27:16.794456 kubelet[2826]: I0904 17:27:16.794439 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103" Sep 4 17:27:16.795038 kubelet[2826]: E0904 17:27:16.795022 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.103:6443/api/v1/nodes\": dial tcp 172.31.30.103:6443: connect: connection refused" node="ip-172-31-30-103" Sep 4 17:27:16.795249 kubelet[2826]: E0904 17:27:16.795054 2826 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-103\" not found" Sep 4 17:27:16.833207 kubelet[2826]: I0904 17:27:16.833153 2826 topology_manager.go:215] "Topology Admit Handler" podUID="77e49488f75e22132c2fd3527f18d066" podNamespace="kube-system" 
podName="kube-apiserver-ip-172-31-30-103" Sep 4 17:27:16.835324 kubelet[2826]: I0904 17:27:16.835286 2826 topology_manager.go:215] "Topology Admit Handler" podUID="68e9fcaa3f467086ba18a73cd79f8f1f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-103" Sep 4 17:27:16.837177 kubelet[2826]: I0904 17:27:16.836948 2826 topology_manager.go:215] "Topology Admit Handler" podUID="a5d7f1b16c8c86e815f18da8cafe6ed2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-103" Sep 4 17:27:16.850098 systemd[1]: Created slice kubepods-burstable-pod77e49488f75e22132c2fd3527f18d066.slice - libcontainer container kubepods-burstable-pod77e49488f75e22132c2fd3527f18d066.slice. Sep 4 17:27:16.868789 systemd[1]: Created slice kubepods-burstable-poda5d7f1b16c8c86e815f18da8cafe6ed2.slice - libcontainer container kubepods-burstable-poda5d7f1b16c8c86e815f18da8cafe6ed2.slice. Sep 4 17:27:16.875490 systemd[1]: Created slice kubepods-burstable-pod68e9fcaa3f467086ba18a73cd79f8f1f.slice - libcontainer container kubepods-burstable-pod68e9fcaa3f467086ba18a73cd79f8f1f.slice. 
Sep 4 17:27:16.893098 kubelet[2826]: E0904 17:27:16.893059 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-103?timeout=10s\": dial tcp 172.31.30.103:6443: connect: connection refused" interval="400ms" Sep 4 17:27:16.894715 kubelet[2826]: I0904 17:27:16.894385 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103" Sep 4 17:27:16.894715 kubelet[2826]: I0904 17:27:16.894439 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103" Sep 4 17:27:16.894715 kubelet[2826]: I0904 17:27:16.894475 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103" Sep 4 17:27:16.894715 kubelet[2826]: I0904 17:27:16.894509 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5d7f1b16c8c86e815f18da8cafe6ed2-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-103\" (UID: \"a5d7f1b16c8c86e815f18da8cafe6ed2\") " pod="kube-system/kube-scheduler-ip-172-31-30-103" Sep 4 17:27:16.894715 
kubelet[2826]: I0904 17:27:16.894539 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-ca-certs\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:16.894923 kubelet[2826]: I0904 17:27:16.894566 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:16.894923 kubelet[2826]: I0904 17:27:16.894599 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:16.894923 kubelet[2826]: I0904 17:27:16.894618 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:16.894923 kubelet[2826]: I0904 17:27:16.894638 2826 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:16.998270 kubelet[2826]: I0904 17:27:16.997811 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103"
Sep 4 17:27:16.998270 kubelet[2826]: E0904 17:27:16.998165 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.103:6443/api/v1/nodes\": dial tcp 172.31.30.103:6443: connect: connection refused" node="ip-172-31-30-103"
Sep 4 17:27:17.166544 containerd[1981]: time="2024-09-04T17:27:17.166422321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-103,Uid:77e49488f75e22132c2fd3527f18d066,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:17.174174 containerd[1981]: time="2024-09-04T17:27:17.174123618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-103,Uid:a5d7f1b16c8c86e815f18da8cafe6ed2,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:17.178679 containerd[1981]: time="2024-09-04T17:27:17.178633195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-103,Uid:68e9fcaa3f467086ba18a73cd79f8f1f,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:17.294393 kubelet[2826]: E0904 17:27:17.294358 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-103?timeout=10s\": dial tcp 172.31.30.103:6443: connect: connection refused" interval="800ms"
Sep 4 17:27:17.400417 kubelet[2826]: I0904 17:27:17.400383 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103"
Sep 4 17:27:17.400748 kubelet[2826]: E0904 17:27:17.400731 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.103:6443/api/v1/nodes\": dial tcp 172.31.30.103:6443: connect: connection refused" node="ip-172-31-30-103"
Sep 4 17:27:17.722922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298156660.mount: Deactivated successfully.
Sep 4 17:27:17.738362 containerd[1981]: time="2024-09-04T17:27:17.738296090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:27:17.743291 containerd[1981]: time="2024-09-04T17:27:17.743179132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:27:17.746210 containerd[1981]: time="2024-09-04T17:27:17.745827961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:27:17.747064 containerd[1981]: time="2024-09-04T17:27:17.747009760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 4 17:27:17.750450 containerd[1981]: time="2024-09-04T17:27:17.750404192Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:27:17.753871 containerd[1981]: time="2024-09-04T17:27:17.753469039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:27:17.754807 containerd[1981]: time="2024-09-04T17:27:17.754467289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 4 17:27:17.757881 kubelet[2826]: W0904 17:27:17.757825 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-103&limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:17.758139 kubelet[2826]: E0904 17:27:17.757889 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-103&limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:17.781252 containerd[1981]: time="2024-09-04T17:27:17.777611187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 17:27:17.781252 containerd[1981]: time="2024-09-04T17:27:17.781167398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.43384ms"
Sep 4 17:27:17.783891 containerd[1981]: time="2024-09-04T17:27:17.783844522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.614411ms"
Sep 4 17:27:17.791451 containerd[1981]: time="2024-09-04T17:27:17.791385803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 624.853424ms"
Sep 4 17:27:17.887640 kubelet[2826]: W0904 17:27:17.887444 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:17.887640 kubelet[2826]: E0904 17:27:17.887521 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:17.898762 kubelet[2826]: W0904 17:27:17.898510 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:17.898762 kubelet[2826]: E0904 17:27:17.898770 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:18.098300 kubelet[2826]: E0904 17:27:18.097749 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-103?timeout=10s\": dial tcp 172.31.30.103:6443: connect: connection refused" interval="1.6s"
Sep 4 17:27:18.189998 containerd[1981]: time="2024-09-04T17:27:18.189787943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:27:18.189998 containerd[1981]: time="2024-09-04T17:27:18.189912267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.189998 containerd[1981]: time="2024-09-04T17:27:18.189960966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:27:18.191137 containerd[1981]: time="2024-09-04T17:27:18.189982669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.191261 containerd[1981]: time="2024-09-04T17:27:18.190581304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:27:18.191261 containerd[1981]: time="2024-09-04T17:27:18.190646835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.191261 containerd[1981]: time="2024-09-04T17:27:18.190683689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:27:18.191261 containerd[1981]: time="2024-09-04T17:27:18.190704254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.206576 kubelet[2826]: I0904 17:27:18.206506 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103"
Sep 4 17:27:18.207515 containerd[1981]: time="2024-09-04T17:27:18.204953399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:27:18.207515 containerd[1981]: time="2024-09-04T17:27:18.205034673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.207515 containerd[1981]: time="2024-09-04T17:27:18.205081370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:27:18.207515 containerd[1981]: time="2024-09-04T17:27:18.205105202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:18.208921 kubelet[2826]: E0904 17:27:18.208777 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.103:6443/api/v1/nodes\": dial tcp 172.31.30.103:6443: connect: connection refused" node="ip-172-31-30-103"
Sep 4 17:27:18.254745 systemd[1]: Started cri-containerd-5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650.scope - libcontainer container 5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650.
Sep 4 17:27:18.273123 systemd[1]: Started cri-containerd-482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c.scope - libcontainer container 482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c.
Sep 4 17:27:18.287748 systemd[1]: Started cri-containerd-7908a6ac979c0b62f3707e63f99da75eab26994defb894d70ec5b7d7a5fa3e80.scope - libcontainer container 7908a6ac979c0b62f3707e63f99da75eab26994defb894d70ec5b7d7a5fa3e80.
Sep 4 17:27:18.305290 kubelet[2826]: W0904 17:27:18.304437 2826 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:18.305290 kubelet[2826]: E0904 17:27:18.304599 2826 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:18.386320 containerd[1981]: time="2024-09-04T17:27:18.386110489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-103,Uid:a5d7f1b16c8c86e815f18da8cafe6ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650\""
Sep 4 17:27:18.397619 containerd[1981]: time="2024-09-04T17:27:18.397551626Z" level=info msg="CreateContainer within sandbox \"5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 4 17:27:18.405734 containerd[1981]: time="2024-09-04T17:27:18.405683463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-103,Uid:77e49488f75e22132c2fd3527f18d066,Namespace:kube-system,Attempt:0,} returns sandbox id \"7908a6ac979c0b62f3707e63f99da75eab26994defb894d70ec5b7d7a5fa3e80\""
Sep 4 17:27:18.415671 containerd[1981]: time="2024-09-04T17:27:18.415631626Z" level=info msg="CreateContainer within sandbox \"7908a6ac979c0b62f3707e63f99da75eab26994defb894d70ec5b7d7a5fa3e80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 4 17:27:18.434443 containerd[1981]: time="2024-09-04T17:27:18.433170339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-103,Uid:68e9fcaa3f467086ba18a73cd79f8f1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c\""
Sep 4 17:27:18.459741 containerd[1981]: time="2024-09-04T17:27:18.459696068Z" level=info msg="CreateContainer within sandbox \"482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 4 17:27:18.468994 containerd[1981]: time="2024-09-04T17:27:18.468941247Z" level=info msg="CreateContainer within sandbox \"5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec\""
Sep 4 17:27:18.470379 containerd[1981]: time="2024-09-04T17:27:18.470342188Z" level=info msg="StartContainer for \"6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec\""
Sep 4 17:27:18.496300 containerd[1981]: time="2024-09-04T17:27:18.495120145Z" level=info msg="CreateContainer within sandbox \"7908a6ac979c0b62f3707e63f99da75eab26994defb894d70ec5b7d7a5fa3e80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdc27794494558005cdca96d674d6adaf44721d9e71960fc1b99dfa911e67b5b\""
Sep 4 17:27:18.496665 containerd[1981]: time="2024-09-04T17:27:18.496634691Z" level=info msg="StartContainer for \"cdc27794494558005cdca96d674d6adaf44721d9e71960fc1b99dfa911e67b5b\""
Sep 4 17:27:18.509949 containerd[1981]: time="2024-09-04T17:27:18.509903448Z" level=info msg="CreateContainer within sandbox \"482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1\""
Sep 4 17:27:18.511170 containerd[1981]: time="2024-09-04T17:27:18.511126036Z" level=info msg="StartContainer for \"319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1\""
Sep 4 17:27:18.521470 systemd[1]: Started cri-containerd-6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec.scope - libcontainer container 6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec.
Sep 4 17:27:18.571967 systemd[1]: Started cri-containerd-cdc27794494558005cdca96d674d6adaf44721d9e71960fc1b99dfa911e67b5b.scope - libcontainer container cdc27794494558005cdca96d674d6adaf44721d9e71960fc1b99dfa911e67b5b.
Sep 4 17:27:18.586829 systemd[1]: Started cri-containerd-319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1.scope - libcontainer container 319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1.
Sep 4 17:27:18.642160 containerd[1981]: time="2024-09-04T17:27:18.641815280Z" level=info msg="StartContainer for \"6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec\" returns successfully"
Sep 4 17:27:18.682546 containerd[1981]: time="2024-09-04T17:27:18.682498140Z" level=info msg="StartContainer for \"cdc27794494558005cdca96d674d6adaf44721d9e71960fc1b99dfa911e67b5b\" returns successfully"
Sep 4 17:27:18.736202 containerd[1981]: time="2024-09-04T17:27:18.736151411Z" level=info msg="StartContainer for \"319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1\" returns successfully"
Sep 4 17:27:18.794657 kubelet[2826]: E0904 17:27:18.794618 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.103:6443: connect: connection refused
Sep 4 17:27:19.272998 kubelet[2826]: E0904 17:27:19.272946 2826 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.103:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-103.17f21a9582105247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-103,UID:ip-172-31-30-103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-103,},FirstTimestamp:2024-09-04 17:27:16.670878279 +0000 UTC m=+0.470549411,LastTimestamp:2024-09-04 17:27:16.670878279 +0000 UTC m=+0.470549411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-103,}"
Sep 4 17:27:19.813205 kubelet[2826]: I0904 17:27:19.813174 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103"
Sep 4 17:27:21.791687 kubelet[2826]: E0904 17:27:21.791640 2826 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-103\" not found" node="ip-172-31-30-103"
Sep 4 17:27:21.811988 kubelet[2826]: I0904 17:27:21.811828 2826 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-103"
Sep 4 17:27:22.668022 kubelet[2826]: I0904 17:27:22.667972 2826 apiserver.go:52] "Watching apiserver"
Sep 4 17:27:22.693350 kubelet[2826]: I0904 17:27:22.693155 2826 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:27:24.908169 systemd[1]: Reloading requested from client PID 3100 ('systemctl') (unit session-7.scope)...
Sep 4 17:27:24.908189 systemd[1]: Reloading...
Sep 4 17:27:25.070263 zram_generator::config[3138]: No configuration found.
Sep 4 17:27:25.245435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:27:25.364165 systemd[1]: Reloading finished in 455 ms.
Sep 4 17:27:25.424572 kubelet[2826]: I0904 17:27:25.424489 2826 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:27:25.425221 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:27:25.448441 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:27:25.449395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:27:25.456008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:27:25.961657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:27:25.975806 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:27:26.111212 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:27:26.111212 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:27:26.111635 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:27:26.111635 kubelet[3195]: I0904 17:27:26.111345 3195 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:27:26.119093 kubelet[3195]: I0904 17:27:26.119032 3195 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Sep 4 17:27:26.121077 kubelet[3195]: I0904 17:27:26.121047 3195 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:27:26.121399 kubelet[3195]: I0904 17:27:26.121379 3195 server.go:919] "Client rotation is on, will bootstrap in background"
Sep 4 17:27:26.128982 kubelet[3195]: I0904 17:27:26.128943 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 4 17:27:26.133490 kubelet[3195]: I0904 17:27:26.133332 3195 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:27:26.149401 kubelet[3195]: I0904 17:27:26.149166 3195 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:27:26.150540 kubelet[3195]: I0904 17:27:26.150179 3195 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:27:26.150540 kubelet[3195]: I0904 17:27:26.150367 3195 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:27:26.150540 kubelet[3195]: I0904 17:27:26.150390 3195 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:27:26.150540 kubelet[3195]: I0904 17:27:26.150400 3195 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:27:26.153585 kubelet[3195]: I0904 17:27:26.153552 3195 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:27:26.153924 kubelet[3195]: I0904 17:27:26.153913 3195 kubelet.go:396] "Attempting to sync node with API server"
Sep 4 17:27:26.154434 kubelet[3195]: I0904 17:27:26.154418 3195 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:27:26.155245 kubelet[3195]: I0904 17:27:26.154564 3195 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:27:26.155245 kubelet[3195]: I0904 17:27:26.154587 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:27:26.157044 kubelet[3195]: I0904 17:27:26.157028 3195 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:27:26.159243 kubelet[3195]: I0904 17:27:26.157507 3195 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:27:26.161099 kubelet[3195]: I0904 17:27:26.161081 3195 server.go:1256] "Started kubelet"
Sep 4 17:27:26.163968 kubelet[3195]: I0904 17:27:26.163692 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:27:26.178446 kubelet[3195]: I0904 17:27:26.178414 3195 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:27:26.182524 kubelet[3195]: I0904 17:27:26.179486 3195 server.go:461] "Adding debug handlers to kubelet server"
Sep 4 17:27:26.182524 kubelet[3195]: I0904 17:27:26.182223 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:27:26.182735 kubelet[3195]: I0904 17:27:26.182719 3195 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:27:26.187085 kubelet[3195]: I0904 17:27:26.186059 3195 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:27:26.187085 kubelet[3195]: I0904 17:27:26.186473 3195 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:27:26.187085 kubelet[3195]: I0904 17:27:26.186621 3195 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:27:26.211793 kubelet[3195]: I0904 17:27:26.211760 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:27:26.212555 kubelet[3195]: I0904 17:27:26.212482 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:27:26.216357 kubelet[3195]: I0904 17:27:26.216336 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:27:26.219272 kubelet[3195]: I0904 17:27:26.216499 3195 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:27:26.219272 kubelet[3195]: I0904 17:27:26.216523 3195 kubelet.go:2329] "Starting kubelet main sync loop"
Sep 4 17:27:26.219272 kubelet[3195]: E0904 17:27:26.216579 3195 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:27:26.221251 kubelet[3195]: E0904 17:27:26.221177 3195 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:27:26.235488 kubelet[3195]: I0904 17:27:26.235210 3195 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:27:26.235488 kubelet[3195]: I0904 17:27:26.235287 3195 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:27:26.296330 kubelet[3195]: I0904 17:27:26.295567 3195 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-103"
Sep 4 17:27:26.315914 kubelet[3195]: I0904 17:27:26.313357 3195 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-103"
Sep 4 17:27:26.315914 kubelet[3195]: I0904 17:27:26.313473 3195 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-103"
Sep 4 17:27:26.319908 kubelet[3195]: E0904 17:27:26.317664 3195 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:27:26.331592 kubelet[3195]: I0904 17:27:26.331510 3195 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:27:26.331592 kubelet[3195]: I0904 17:27:26.331590 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:27:26.331778 kubelet[3195]: I0904 17:27:26.331611 3195 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:27:26.331901 kubelet[3195]: I0904 17:27:26.331786 3195 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 4 17:27:26.331901 kubelet[3195]: I0904 17:27:26.331886 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 4 17:27:26.331901 kubelet[3195]: I0904 17:27:26.331899 3195 policy_none.go:49] "None policy: Start"
Sep 4 17:27:26.332933 kubelet[3195]: I0904 17:27:26.332911 3195 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:27:26.333140 kubelet[3195]: I0904 17:27:26.332941 3195 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:27:26.333140 kubelet[3195]: I0904 17:27:26.333110 3195 state_mem.go:75] "Updated machine memory state"
Sep 4 17:27:26.344029 kubelet[3195]: I0904 17:27:26.343989 3195 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:27:26.345426 kubelet[3195]: I0904 17:27:26.344557 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:27:26.519900 kubelet[3195]: I0904 17:27:26.518466 3195 topology_manager.go:215] "Topology Admit Handler" podUID="68e9fcaa3f467086ba18a73cd79f8f1f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.519900 kubelet[3195]: I0904 17:27:26.518588 3195 topology_manager.go:215] "Topology Admit Handler" podUID="a5d7f1b16c8c86e815f18da8cafe6ed2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-103"
Sep 4 17:27:26.519900 kubelet[3195]: I0904 17:27:26.518634 3195 topology_manager.go:215] "Topology Admit Handler" podUID="77e49488f75e22132c2fd3527f18d066" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:26.529067 kubelet[3195]: E0904 17:27:26.529033 3195 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-103\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.530865 kubelet[3195]: E0904 17:27:26.530731 3195 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-103\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:26.530865 kubelet[3195]: E0904 17:27:26.530737 3195 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-103\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-103"
Sep 4 17:27:26.588513 kubelet[3195]: I0904 17:27:26.588465 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.588513 kubelet[3195]: I0904 17:27:26.588517 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.588716 kubelet[3195]: I0904 17:27:26.588550 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.588716 kubelet[3195]: I0904 17:27:26.588576 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5d7f1b16c8c86e815f18da8cafe6ed2-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-103\" (UID: \"a5d7f1b16c8c86e815f18da8cafe6ed2\") " pod="kube-system/kube-scheduler-ip-172-31-30-103"
Sep 4 17:27:26.588716 kubelet[3195]: I0904 17:27:26.588607 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:26.588716 kubelet[3195]: I0904 17:27:26.588639 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:26.588716 kubelet[3195]: I0904 17:27:26.588666 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.588935 kubelet[3195]: I0904 17:27:26.588698 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68e9fcaa3f467086ba18a73cd79f8f1f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-103\" (UID: \"68e9fcaa3f467086ba18a73cd79f8f1f\") " pod="kube-system/kube-controller-manager-ip-172-31-30-103"
Sep 4 17:27:26.588935 kubelet[3195]: I0904 17:27:26.588726 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e49488f75e22132c2fd3527f18d066-ca-certs\") pod \"kube-apiserver-ip-172-31-30-103\" (UID: \"77e49488f75e22132c2fd3527f18d066\") " pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:27.170810 kubelet[3195]: I0904 17:27:27.170763 3195 apiserver.go:52] "Watching apiserver"
Sep 4 17:27:27.187113 kubelet[3195]: I0904 17:27:27.187051 3195 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Sep 4 17:27:27.316739 kubelet[3195]: E0904 17:27:27.315317 3195 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-103\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-103"
Sep 4 17:27:27.414414 update_engine[1951]: I0904 17:27:27.413280 1951 update_attempter.cc:509] Updating boot flags...
Sep 4 17:27:27.482367 kubelet[3195]: I0904 17:27:27.478809 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-103" podStartSLOduration=3.478740248 podStartE2EDuration="3.478740248s" podCreationTimestamp="2024-09-04 17:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:27:27.408749874 +0000 UTC m=+1.423162839" watchObservedRunningTime="2024-09-04 17:27:27.478740248 +0000 UTC m=+1.493153210"
Sep 4 17:27:27.534417 kubelet[3195]: I0904 17:27:27.532743 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-103" podStartSLOduration=2.532696605 podStartE2EDuration="2.532696605s" podCreationTimestamp="2024-09-04 17:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:27:27.48471171 +0000 UTC m=+1.499124677" watchObservedRunningTime="2024-09-04 17:27:27.532696605 +0000 UTC m=+1.547109572"
Sep 4 17:27:27.601258 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3248)
Sep 4 17:27:28.052421 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3248)
Sep 4 17:27:28.602335 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3248)
Sep 4 17:27:32.274422 kubelet[3195]: I0904 17:27:32.274218 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-103" podStartSLOduration=9.274165356 podStartE2EDuration="9.274165356s" podCreationTimestamp="2024-09-04 17:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:27:27.538925968 +0000 UTC m=+1.553338934" watchObservedRunningTime="2024-09-04 17:27:32.274165356 +0000 UTC m=+6.288578321"
Sep 4 17:27:32.916322 sudo[2292]: pam_unix(sudo:session): session closed for user root
Sep 4 17:27:32.943694 sshd[2289]: pam_unix(sshd:session): session closed for user core
Sep 4 17:27:32.947167 systemd[1]: sshd@6-172.31.30.103:22-139.178.68.195:48422.service: Deactivated successfully.
Sep 4 17:27:32.950884 systemd[1]: session-7.scope: Deactivated successfully.
Sep 4 17:27:32.951443 systemd[1]: session-7.scope: Consumed 5.474s CPU time, 133.0M memory peak, 0B memory swap peak.
Sep 4 17:27:32.953291 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit.
Sep 4 17:27:32.954695 systemd-logind[1947]: Removed session 7.
Sep 4 17:27:37.223340 kubelet[3195]: I0904 17:27:37.222609 3195 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:27:37.225747 containerd[1981]: time="2024-09-04T17:27:37.224884914Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:27:37.226959 kubelet[3195]: I0904 17:27:37.225532 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:27:38.150639 kubelet[3195]: I0904 17:27:38.150593 3195 topology_manager.go:215] "Topology Admit Handler" podUID="6296e1c0-f6ec-429b-8f40-1e6a71e4e803" podNamespace="kube-system" podName="kube-proxy-hnd2c"
Sep 4 17:27:38.165264 systemd[1]: Created slice kubepods-besteffort-pod6296e1c0_f6ec_429b_8f40_1e6a71e4e803.slice - libcontainer container kubepods-besteffort-pod6296e1c0_f6ec_429b_8f40_1e6a71e4e803.slice.
Sep 4 17:27:38.292412 kubelet[3195]: I0904 17:27:38.291937 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6296e1c0-f6ec-429b-8f40-1e6a71e4e803-kube-proxy\") pod \"kube-proxy-hnd2c\" (UID: \"6296e1c0-f6ec-429b-8f40-1e6a71e4e803\") " pod="kube-system/kube-proxy-hnd2c"
Sep 4 17:27:38.292412 kubelet[3195]: I0904 17:27:38.291994 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6296e1c0-f6ec-429b-8f40-1e6a71e4e803-xtables-lock\") pod \"kube-proxy-hnd2c\" (UID: \"6296e1c0-f6ec-429b-8f40-1e6a71e4e803\") " pod="kube-system/kube-proxy-hnd2c"
Sep 4 17:27:38.292412 kubelet[3195]: I0904 17:27:38.292045 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6296e1c0-f6ec-429b-8f40-1e6a71e4e803-lib-modules\") pod \"kube-proxy-hnd2c\" (UID: \"6296e1c0-f6ec-429b-8f40-1e6a71e4e803\") " pod="kube-system/kube-proxy-hnd2c"
Sep 4 17:27:38.292412 kubelet[3195]: I0904 17:27:38.292086 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppqb\" (UniqueName: \"kubernetes.io/projected/6296e1c0-f6ec-429b-8f40-1e6a71e4e803-kube-api-access-bppqb\") pod \"kube-proxy-hnd2c\" (UID: \"6296e1c0-f6ec-429b-8f40-1e6a71e4e803\") " pod="kube-system/kube-proxy-hnd2c"
Sep 4 17:27:38.380629 kubelet[3195]: I0904 17:27:38.379585 3195 topology_manager.go:215] "Topology Admit Handler" podUID="c159d260-fed7-4ee8-89ea-4c01b4f8e437" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-5cb69"
Sep 4 17:27:38.427687 systemd[1]: Created slice kubepods-besteffort-podc159d260_fed7_4ee8_89ea_4c01b4f8e437.slice - libcontainer container kubepods-besteffort-podc159d260_fed7_4ee8_89ea_4c01b4f8e437.slice.
Sep 4 17:27:38.493951 kubelet[3195]: I0904 17:27:38.493908 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c159d260-fed7-4ee8-89ea-4c01b4f8e437-var-lib-calico\") pod \"tigera-operator-5d56685c77-5cb69\" (UID: \"c159d260-fed7-4ee8-89ea-4c01b4f8e437\") " pod="tigera-operator/tigera-operator-5d56685c77-5cb69"
Sep 4 17:27:38.494090 kubelet[3195]: I0904 17:27:38.493970 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxq86\" (UniqueName: \"kubernetes.io/projected/c159d260-fed7-4ee8-89ea-4c01b4f8e437-kube-api-access-mxq86\") pod \"tigera-operator-5d56685c77-5cb69\" (UID: \"c159d260-fed7-4ee8-89ea-4c01b4f8e437\") " pod="tigera-operator/tigera-operator-5d56685c77-5cb69"
Sep 4 17:27:38.750465 containerd[1981]: time="2024-09-04T17:27:38.750328563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5cb69,Uid:c159d260-fed7-4ee8-89ea-4c01b4f8e437,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:27:38.786154 containerd[1981]: time="2024-09-04T17:27:38.776876063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hnd2c,Uid:6296e1c0-f6ec-429b-8f40-1e6a71e4e803,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:38.819916 containerd[1981]: time="2024-09-04T17:27:38.812018447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:27:38.819916 containerd[1981]: time="2024-09-04T17:27:38.812090268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:38.819916 containerd[1981]: time="2024-09-04T17:27:38.812122518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:27:38.819916 containerd[1981]: time="2024-09-04T17:27:38.812145569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:38.882964 containerd[1981]: time="2024-09-04T17:27:38.882850759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:27:38.886485 systemd[1]: Started cri-containerd-3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5.scope - libcontainer container 3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5.
Sep 4 17:27:38.888117 containerd[1981]: time="2024-09-04T17:27:38.887869993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:38.888518 containerd[1981]: time="2024-09-04T17:27:38.888150036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:27:38.888518 containerd[1981]: time="2024-09-04T17:27:38.888190310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:27:38.961675 systemd[1]: Started cri-containerd-f5f06858275eea899cd744604161d01c7064d802b86173158cea000f740e9716.scope - libcontainer container f5f06858275eea899cd744604161d01c7064d802b86173158cea000f740e9716.
Sep 4 17:27:39.026854 containerd[1981]: time="2024-09-04T17:27:39.026634969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hnd2c,Uid:6296e1c0-f6ec-429b-8f40-1e6a71e4e803,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f06858275eea899cd744604161d01c7064d802b86173158cea000f740e9716\""
Sep 4 17:27:39.035096 containerd[1981]: time="2024-09-04T17:27:39.035049128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-5cb69,Uid:c159d260-fed7-4ee8-89ea-4c01b4f8e437,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5\""
Sep 4 17:27:39.045175 containerd[1981]: time="2024-09-04T17:27:39.044887114Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:27:39.049546 containerd[1981]: time="2024-09-04T17:27:39.048661356Z" level=info msg="CreateContainer within sandbox \"f5f06858275eea899cd744604161d01c7064d802b86173158cea000f740e9716\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:27:39.087703 containerd[1981]: time="2024-09-04T17:27:39.087653636Z" level=info msg="CreateContainer within sandbox \"f5f06858275eea899cd744604161d01c7064d802b86173158cea000f740e9716\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a16e968cb3d9fbc6aedc692c2d6b28b67c49e0ed917f9b4e05999ffca9312265\""
Sep 4 17:27:39.089107 containerd[1981]: time="2024-09-04T17:27:39.088890019Z" level=info msg="StartContainer for \"a16e968cb3d9fbc6aedc692c2d6b28b67c49e0ed917f9b4e05999ffca9312265\""
Sep 4 17:27:39.160444 systemd[1]: Started cri-containerd-a16e968cb3d9fbc6aedc692c2d6b28b67c49e0ed917f9b4e05999ffca9312265.scope - libcontainer container a16e968cb3d9fbc6aedc692c2d6b28b67c49e0ed917f9b4e05999ffca9312265.
Sep 4 17:27:39.201819 containerd[1981]: time="2024-09-04T17:27:39.201768680Z" level=info msg="StartContainer for \"a16e968cb3d9fbc6aedc692c2d6b28b67c49e0ed917f9b4e05999ffca9312265\" returns successfully"
Sep 4 17:27:40.339204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151324492.mount: Deactivated successfully.
Sep 4 17:27:41.502110 containerd[1981]: time="2024-09-04T17:27:41.502053666Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:41.506354 containerd[1981]: time="2024-09-04T17:27:41.506288575Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136553"
Sep 4 17:27:41.509565 containerd[1981]: time="2024-09-04T17:27:41.509253885Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:41.519480 containerd[1981]: time="2024-09-04T17:27:41.519426546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:27:41.523318 containerd[1981]: time="2024-09-04T17:27:41.522070775Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.477131396s"
Sep 4 17:27:41.523318 containerd[1981]: time="2024-09-04T17:27:41.522119992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep 4 17:27:41.528100 containerd[1981]: time="2024-09-04T17:27:41.528061380Z" level=info msg="CreateContainer within sandbox \"3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:27:41.571399 containerd[1981]: time="2024-09-04T17:27:41.571352737Z" level=info msg="CreateContainer within sandbox \"3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33\""
Sep 4 17:27:41.573253 containerd[1981]: time="2024-09-04T17:27:41.572015345Z" level=info msg="StartContainer for \"b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33\""
Sep 4 17:27:41.615411 systemd[1]: run-containerd-runc-k8s.io-b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33-runc.R9LHeN.mount: Deactivated successfully.
Sep 4 17:27:41.623529 systemd[1]: Started cri-containerd-b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33.scope - libcontainer container b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33.
Sep 4 17:27:41.665097 containerd[1981]: time="2024-09-04T17:27:41.665047135Z" level=info msg="StartContainer for \"b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33\" returns successfully"
Sep 4 17:27:42.357482 kubelet[3195]: I0904 17:27:42.356875 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hnd2c" podStartSLOduration=4.356825887 podStartE2EDuration="4.356825887s" podCreationTimestamp="2024-09-04 17:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:27:39.334303223 +0000 UTC m=+13.348716216" watchObservedRunningTime="2024-09-04 17:27:42.356825887 +0000 UTC m=+16.371238849"
Sep 4 17:27:42.358037 kubelet[3195]: I0904 17:27:42.357804 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-5cb69" podStartSLOduration=1.872804403 podStartE2EDuration="4.357761135s" podCreationTimestamp="2024-09-04 17:27:38 +0000 UTC" firstStartedPulling="2024-09-04 17:27:39.037579562 +0000 UTC m=+13.051992514" lastFinishedPulling="2024-09-04 17:27:41.522536295 +0000 UTC m=+15.536949246" observedRunningTime="2024-09-04 17:27:42.35769345 +0000 UTC m=+16.372106416" watchObservedRunningTime="2024-09-04 17:27:42.357761135 +0000 UTC m=+16.372174100"
Sep 4 17:27:44.897794 kubelet[3195]: I0904 17:27:44.897753 3195 topology_manager.go:215] "Topology Admit Handler" podUID="4b0f2f9e-991e-4c45-8578-ba5b13a487a8" podNamespace="calico-system" podName="calico-typha-66874c69cc-vngbt"
Sep 4 17:27:44.909149 systemd[1]: Created slice kubepods-besteffort-pod4b0f2f9e_991e_4c45_8578_ba5b13a487a8.slice - libcontainer container kubepods-besteffort-pod4b0f2f9e_991e_4c45_8578_ba5b13a487a8.slice.
Sep 4 17:27:44.953742 kubelet[3195]: I0904 17:27:44.950845 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b0f2f9e-991e-4c45-8578-ba5b13a487a8-tigera-ca-bundle\") pod \"calico-typha-66874c69cc-vngbt\" (UID: \"4b0f2f9e-991e-4c45-8578-ba5b13a487a8\") " pod="calico-system/calico-typha-66874c69cc-vngbt"
Sep 4 17:27:44.953742 kubelet[3195]: I0904 17:27:44.950916 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmh64\" (UniqueName: \"kubernetes.io/projected/4b0f2f9e-991e-4c45-8578-ba5b13a487a8-kube-api-access-kmh64\") pod \"calico-typha-66874c69cc-vngbt\" (UID: \"4b0f2f9e-991e-4c45-8578-ba5b13a487a8\") " pod="calico-system/calico-typha-66874c69cc-vngbt"
Sep 4 17:27:44.953742 kubelet[3195]: I0904 17:27:44.951054 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b0f2f9e-991e-4c45-8578-ba5b13a487a8-typha-certs\") pod \"calico-typha-66874c69cc-vngbt\" (UID: \"4b0f2f9e-991e-4c45-8578-ba5b13a487a8\") " pod="calico-system/calico-typha-66874c69cc-vngbt"
Sep 4 17:27:45.048517 kubelet[3195]: I0904 17:27:45.048473 3195 topology_manager.go:215] "Topology Admit Handler" podUID="104d4934-f254-4783-a298-14b5b46692a9" podNamespace="calico-system" podName="calico-node-2v98z"
Sep 4 17:27:45.081339 systemd[1]: Created slice kubepods-besteffort-pod104d4934_f254_4783_a298_14b5b46692a9.slice - libcontainer container kubepods-besteffort-pod104d4934_f254_4783_a298_14b5b46692a9.slice.
Sep 4 17:27:45.153392 kubelet[3195]: I0904 17:27:45.153254 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-lib-modules\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153392 kubelet[3195]: I0904 17:27:45.153318 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-cni-log-dir\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153392 kubelet[3195]: I0904 17:27:45.153351 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwclg\" (UniqueName: \"kubernetes.io/projected/104d4934-f254-4783-a298-14b5b46692a9-kube-api-access-pwclg\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153392 kubelet[3195]: I0904 17:27:45.153379 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/104d4934-f254-4783-a298-14b5b46692a9-node-certs\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153871 kubelet[3195]: I0904 17:27:45.153409 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-var-run-calico\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153871 kubelet[3195]: I0904 17:27:45.153612 3195 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-flexvol-driver-host\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153871 kubelet[3195]: I0904 17:27:45.153649 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-xtables-lock\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153871 kubelet[3195]: I0904 17:27:45.153679 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-policysync\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.153871 kubelet[3195]: I0904 17:27:45.153705 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/104d4934-f254-4783-a298-14b5b46692a9-tigera-ca-bundle\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.154145 kubelet[3195]: I0904 17:27:45.153740 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-cni-net-dir\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.154145 kubelet[3195]: I0904 17:27:45.153774 3195 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-var-lib-calico\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.154145 kubelet[3195]: I0904 17:27:45.153804 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/104d4934-f254-4783-a298-14b5b46692a9-cni-bin-dir\") pod \"calico-node-2v98z\" (UID: \"104d4934-f254-4783-a298-14b5b46692a9\") " pod="calico-system/calico-node-2v98z" Sep 4 17:27:45.199318 kubelet[3195]: I0904 17:27:45.198837 3195 topology_manager.go:215] "Topology Admit Handler" podUID="906685ae-b7d7-4862-82f6-b94651385380" podNamespace="calico-system" podName="csi-node-driver-rnqrz" Sep 4 17:27:45.199318 kubelet[3195]: E0904 17:27:45.199195 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:45.225804 containerd[1981]: time="2024-09-04T17:27:45.225752755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66874c69cc-vngbt,Uid:4b0f2f9e-991e-4c45-8578-ba5b13a487a8,Namespace:calico-system,Attempt:0,}" Sep 4 17:27:45.256729 kubelet[3195]: I0904 17:27:45.254820 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c6g\" (UniqueName: \"kubernetes.io/projected/906685ae-b7d7-4862-82f6-b94651385380-kube-api-access-t8c6g\") pod \"csi-node-driver-rnqrz\" (UID: \"906685ae-b7d7-4862-82f6-b94651385380\") " pod="calico-system/csi-node-driver-rnqrz" Sep 4 17:27:45.256729 kubelet[3195]: I0904 17:27:45.254994 3195 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/906685ae-b7d7-4862-82f6-b94651385380-registration-dir\") pod \"csi-node-driver-rnqrz\" (UID: \"906685ae-b7d7-4862-82f6-b94651385380\") " pod="calico-system/csi-node-driver-rnqrz" Sep 4 17:27:45.256729 kubelet[3195]: I0904 17:27:45.255028 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/906685ae-b7d7-4862-82f6-b94651385380-varrun\") pod \"csi-node-driver-rnqrz\" (UID: \"906685ae-b7d7-4862-82f6-b94651385380\") " pod="calico-system/csi-node-driver-rnqrz" Sep 4 17:27:45.256729 kubelet[3195]: I0904 17:27:45.255057 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/906685ae-b7d7-4862-82f6-b94651385380-kubelet-dir\") pod \"csi-node-driver-rnqrz\" (UID: \"906685ae-b7d7-4862-82f6-b94651385380\") " pod="calico-system/csi-node-driver-rnqrz" Sep 4 17:27:45.256729 kubelet[3195]: I0904 17:27:45.255158 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/906685ae-b7d7-4862-82f6-b94651385380-socket-dir\") pod \"csi-node-driver-rnqrz\" (UID: \"906685ae-b7d7-4862-82f6-b94651385380\") " pod="calico-system/csi-node-driver-rnqrz" Sep 4 17:27:45.271846 kubelet[3195]: E0904 17:27:45.271610 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.271846 kubelet[3195]: W0904 17:27:45.271743 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.272038 kubelet[3195]: E0904 17:27:45.271960 3195 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.284565 kubelet[3195]: E0904 17:27:45.282504 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.284565 kubelet[3195]: W0904 17:27:45.282530 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.284565 kubelet[3195]: E0904 17:27:45.282558 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.322300 containerd[1981]: time="2024-09-04T17:27:45.321623207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:27:45.322300 containerd[1981]: time="2024-09-04T17:27:45.321721859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:27:45.322300 containerd[1981]: time="2024-09-04T17:27:45.321750969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:27:45.322300 containerd[1981]: time="2024-09-04T17:27:45.321772692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:27:45.357398 kubelet[3195]: E0904 17:27:45.356912 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.357398 kubelet[3195]: W0904 17:27:45.356936 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.357398 kubelet[3195]: E0904 17:27:45.356963 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.359700 kubelet[3195]: E0904 17:27:45.359519 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.359700 kubelet[3195]: W0904 17:27:45.359537 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.359700 kubelet[3195]: E0904 17:27:45.359564 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:27:45.361964 kubelet[3195]: E0904 17:27:45.361597 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.361964 kubelet[3195]: W0904 17:27:45.361615 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.361964 kubelet[3195]: E0904 17:27:45.361643 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.364933 kubelet[3195]: E0904 17:27:45.364524 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.364933 kubelet[3195]: W0904 17:27:45.364543 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.364933 kubelet[3195]: E0904 17:27:45.364582 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:27:45.371131 kubelet[3195]: E0904 17:27:45.370193 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.371131 kubelet[3195]: W0904 17:27:45.370218 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.371131 kubelet[3195]: E0904 17:27:45.370585 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.371131 kubelet[3195]: W0904 17:27:45.370599 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.371131 kubelet[3195]: E0904 17:27:45.370973 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.371131 kubelet[3195]: E0904 17:27:45.371008 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:27:45.388262 containerd[1981]: time="2024-09-04T17:27:45.386908413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2v98z,Uid:104d4934-f254-4783-a298-14b5b46692a9,Namespace:calico-system,Attempt:0,}" Sep 4 17:27:45.388355 kubelet[3195]: E0904 17:27:45.387020 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.388355 kubelet[3195]: W0904 17:27:45.388334 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.388517 kubelet[3195]: E0904 17:27:45.388381 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.388941 kubelet[3195]: E0904 17:27:45.388872 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.388941 kubelet[3195]: W0904 17:27:45.388886 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.388941 kubelet[3195]: E0904 17:27:45.388918 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:27:45.397015 kubelet[3195]: E0904 17:27:45.397009 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.397248 kubelet[3195]: W0904 17:27:45.397022 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.397248 kubelet[3195]: E0904 17:27:45.397053 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.400260 systemd[1]: Started cri-containerd-995e67a9bceb2307b23e655a6e6d4b3c1a567084bfaf9637e70c2a551f809e48.scope - libcontainer container 995e67a9bceb2307b23e655a6e6d4b3c1a567084bfaf9637e70c2a551f809e48. Sep 4 17:27:45.454792 kubelet[3195]: E0904 17:27:45.453932 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:27:45.454792 kubelet[3195]: W0904 17:27:45.453970 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:27:45.454792 kubelet[3195]: E0904 17:27:45.453995 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:27:45.481853 containerd[1981]: time="2024-09-04T17:27:45.481491378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:27:45.481853 containerd[1981]: time="2024-09-04T17:27:45.481576177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:27:45.481853 containerd[1981]: time="2024-09-04T17:27:45.481606002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:27:45.481853 containerd[1981]: time="2024-09-04T17:27:45.481625868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:27:45.527173 systemd[1]: Started cri-containerd-ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1.scope - libcontainer container ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1. Sep 4 17:27:45.600346 containerd[1981]: time="2024-09-04T17:27:45.600280554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2v98z,Uid:104d4934-f254-4783-a298-14b5b46692a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\"" Sep 4 17:27:45.604713 containerd[1981]: time="2024-09-04T17:27:45.604567610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:27:45.689424 containerd[1981]: time="2024-09-04T17:27:45.689383556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66874c69cc-vngbt,Uid:4b0f2f9e-991e-4c45-8578-ba5b13a487a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"995e67a9bceb2307b23e655a6e6d4b3c1a567084bfaf9637e70c2a551f809e48\"" Sep 4 17:27:47.108204 containerd[1981]: time="2024-09-04T17:27:47.108156214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:47.112431 containerd[1981]: time="2024-09-04T17:27:47.112377688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:27:47.118249 containerd[1981]: 
time="2024-09-04T17:27:47.115723428Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:47.122494 containerd[1981]: time="2024-09-04T17:27:47.122346297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:47.124969 containerd[1981]: time="2024-09-04T17:27:47.124842199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.520120221s" Sep 4 17:27:47.125132 containerd[1981]: time="2024-09-04T17:27:47.125112465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:27:47.130265 containerd[1981]: time="2024-09-04T17:27:47.127481573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:27:47.132120 containerd[1981]: time="2024-09-04T17:27:47.132084824Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:27:47.170257 containerd[1981]: time="2024-09-04T17:27:47.170190390Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f\"" Sep 4 
17:27:47.171146 containerd[1981]: time="2024-09-04T17:27:47.171117687Z" level=info msg="StartContainer for \"48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f\"" Sep 4 17:27:47.219492 kubelet[3195]: E0904 17:27:47.217856 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:47.246478 systemd[1]: Started cri-containerd-48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f.scope - libcontainer container 48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f. Sep 4 17:27:47.312640 containerd[1981]: time="2024-09-04T17:27:47.312594434Z" level=info msg="StartContainer for \"48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f\" returns successfully" Sep 4 17:27:47.341441 systemd[1]: cri-containerd-48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f.scope: Deactivated successfully. Sep 4 17:27:47.390785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f-rootfs.mount: Deactivated successfully. 
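The repeated FlexVolume driver-call failures earlier in this log come from the kubelet executing `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` and parsing its stdout as JSON; since the binary is missing, stdout is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of the driver side of that call-out protocol (hypothetical driver, not the real `nodeagent~uds` binary):

```python
import json


def handle(argv):
    """Minimal FlexVolume driver sketch (hypothetical).

    The kubelet invokes the driver executable with a command name
    ("init", "mount", ...) as argv[1] and parses stdout as JSON, which
    is why a missing binary or empty stdout produces the
    "unexpected end of JSON input" errors seen above.
    """
    command = argv[1] if len(argv) > 1 else ""
    if command == "init":
        # "init" must answer with a JSON status object; attach: false
        # tells the kubelet this driver implements no attach/detach.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
    else:
        # Unimplemented commands are reported explicitly rather than
        # left silent, which would again break the kubelet's JSON parse.
        print(json.dumps({"status": "Not supported"}))
    return 0
```

Emitting any well-formed JSON status, even "Not supported", is what distinguishes a healthy driver from the empty-output failure the kubelet logs here.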
Sep 4 17:27:47.488578 containerd[1981]: time="2024-09-04T17:27:47.452224543Z" level=info msg="shim disconnected" id=48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f namespace=k8s.io Sep 4 17:27:47.489083 containerd[1981]: time="2024-09-04T17:27:47.488846095Z" level=warning msg="cleaning up after shim disconnected" id=48e098b8a6e958f6f1d9c20c062c02b6eaa116e9eed72299a84decef76208a8f namespace=k8s.io Sep 4 17:27:47.489083 containerd[1981]: time="2024-09-04T17:27:47.488876799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:27:49.220776 kubelet[3195]: E0904 17:27:49.220705 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:50.151317 containerd[1981]: time="2024-09-04T17:27:50.150813093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:50.152694 containerd[1981]: time="2024-09-04T17:27:50.152640277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:27:50.154676 containerd[1981]: time="2024-09-04T17:27:50.154639177Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:50.161375 containerd[1981]: time="2024-09-04T17:27:50.161315688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:50.164899 containerd[1981]: time="2024-09-04T17:27:50.164761915Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 3.033552779s" Sep 4 17:27:50.165574 containerd[1981]: time="2024-09-04T17:27:50.165262914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:27:50.167908 containerd[1981]: time="2024-09-04T17:27:50.167125475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:27:50.191671 containerd[1981]: time="2024-09-04T17:27:50.191624795Z" level=info msg="CreateContainer within sandbox \"995e67a9bceb2307b23e655a6e6d4b3c1a567084bfaf9637e70c2a551f809e48\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:27:50.231126 containerd[1981]: time="2024-09-04T17:27:50.231036092Z" level=info msg="CreateContainer within sandbox \"995e67a9bceb2307b23e655a6e6d4b3c1a567084bfaf9637e70c2a551f809e48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29c6fb2a9c18fd84459ada9711aa8ae524df38f8dbd7eedcfaddd0c7f922c0ba\"" Sep 4 17:27:50.232286 containerd[1981]: time="2024-09-04T17:27:50.232061707Z" level=info msg="StartContainer for \"29c6fb2a9c18fd84459ada9711aa8ae524df38f8dbd7eedcfaddd0c7f922c0ba\"" Sep 4 17:27:50.333547 systemd[1]: Started cri-containerd-29c6fb2a9c18fd84459ada9711aa8ae524df38f8dbd7eedcfaddd0c7f922c0ba.scope - libcontainer container 29c6fb2a9c18fd84459ada9711aa8ae524df38f8dbd7eedcfaddd0c7f922c0ba. 
Sep 4 17:27:50.415724 containerd[1981]: time="2024-09-04T17:27:50.415220141Z" level=info msg="StartContainer for \"29c6fb2a9c18fd84459ada9711aa8ae524df38f8dbd7eedcfaddd0c7f922c0ba\" returns successfully" Sep 4 17:27:51.218926 kubelet[3195]: E0904 17:27:51.218486 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:52.404933 kubelet[3195]: I0904 17:27:52.404903 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:27:53.216975 kubelet[3195]: E0904 17:27:53.216926 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:54.887108 containerd[1981]: time="2024-09-04T17:27:54.887059978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:54.888586 containerd[1981]: time="2024-09-04T17:27:54.888456168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:27:54.890948 containerd[1981]: time="2024-09-04T17:27:54.890656918Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:54.893668 containerd[1981]: time="2024-09-04T17:27:54.893607311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:27:54.894442 containerd[1981]: time="2024-09-04T17:27:54.894411462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.727247127s" Sep 4 17:27:54.894563 containerd[1981]: time="2024-09-04T17:27:54.894543509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:27:54.897762 containerd[1981]: time="2024-09-04T17:27:54.897650485Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:27:54.953933 containerd[1981]: time="2024-09-04T17:27:54.953773916Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729\"" Sep 4 17:27:54.956749 containerd[1981]: time="2024-09-04T17:27:54.955945995Z" level=info msg="StartContainer for \"2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729\"" Sep 4 17:27:55.041982 systemd[1]: run-containerd-runc-k8s.io-2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729-runc.3xY4IS.mount: Deactivated successfully. Sep 4 17:27:55.055546 systemd[1]: Started cri-containerd-2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729.scope - libcontainer container 2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729. 
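The containerd entries above report each image pull with its reference and wall-clock duration ("Pulled image ... in 1.520120221s", "... in 3.033552779s", "... in 4.727247127s"). A hedged log-scraping helper for pulling those two fields out of such a line (the quotes inside the `msg` field arrive backslash-escaped in this journal output, so the pattern matches `\"` literally):

```python
import re


def parse_pulled_image(line):
    """Extract (image_ref, seconds) from a containerd 'Pulled image'
    journal line like the ones above, or None if the line does not
    match. Assumes the journald-style backslash-escaped quotes seen
    in this log."""
    image = re.search(r'Pulled image \\"([^"\\]+)\\"', line)
    took = re.search(r'in ([0-9]+(?:\.[0-9]+)?)s"', line)
    if not (image and took):
        return None
    return image.group(1), float(took.group(1))
```

Applied to the pod2daemon-flexvol entry above, this yields the `ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1` reference and a 1.52 s pull.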
Sep 4 17:27:55.112382 containerd[1981]: time="2024-09-04T17:27:55.112224694Z" level=info msg="StartContainer for \"2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729\" returns successfully" Sep 4 17:27:55.208669 kubelet[3195]: I0904 17:27:55.208555 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:27:55.218069 kubelet[3195]: E0904 17:27:55.217128 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380" Sep 4 17:27:55.245853 kubelet[3195]: I0904 17:27:55.245818 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-66874c69cc-vngbt" podStartSLOduration=6.77045388 podStartE2EDuration="11.245757871s" podCreationTimestamp="2024-09-04 17:27:44 +0000 UTC" firstStartedPulling="2024-09-04 17:27:45.69114095 +0000 UTC m=+19.705553896" lastFinishedPulling="2024-09-04 17:27:50.166444941 +0000 UTC m=+24.180857887" observedRunningTime="2024-09-04 17:27:51.451934738 +0000 UTC m=+25.466347705" watchObservedRunningTime="2024-09-04 17:27:55.245757871 +0000 UTC m=+29.260170832" Sep 4 17:27:56.310264 systemd[1]: cri-containerd-2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729.scope: Deactivated successfully. Sep 4 17:27:56.353986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729-rootfs.mount: Deactivated successfully. 
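The `pod_startup_latency_tracker` entry above for calico-typha shows how the two durations relate: podStartE2EDuration (11.245757871s) is observedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (firstStartedPulling to lastFinishedPulling, about 4.475s) from it yields the reported podStartSLOduration of ~6.77045388s. A sketch of that arithmetic using the timestamps from the log (Python's datetime carries only microseconds, so the nanosecond digits are truncated):

```python
from datetime import datetime, timezone


def e2e_startup_seconds(created_at, observed_running_at):
    """E2E pod startup duration as in the pod_startup_latency_tracker
    entry above: the span from pod creation to first observed running."""
    return (observed_running_at - created_at).total_seconds()


# Timestamps taken from the calico-typha-66874c69cc-vngbt entry,
# truncated to microsecond precision.
created = datetime(2024, 9, 4, 17, 27, 44, 0, tzinfo=timezone.utc)
observed = datetime(2024, 9, 4, 17, 27, 55, 245757, tzinfo=timezone.utc)
```

The truncation costs under a microsecond of accuracy, which is why the comparison below uses a tolerance instead of exact equality.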
Sep 4 17:27:56.365486 containerd[1981]: time="2024-09-04T17:27:56.365392880Z" level=info msg="shim disconnected" id=2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729 namespace=k8s.io Sep 4 17:27:56.365486 containerd[1981]: time="2024-09-04T17:27:56.365453335Z" level=warning msg="cleaning up after shim disconnected" id=2197efe84a4f60902b0eca609e020300c42a8613fc028a30fa5622673237c729 namespace=k8s.io Sep 4 17:27:56.365486 containerd[1981]: time="2024-09-04T17:27:56.365465425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:27:56.379423 kubelet[3195]: I0904 17:27:56.378470 3195 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:27:56.419186 kubelet[3195]: I0904 17:27:56.419151 3195 topology_manager.go:215] "Topology Admit Handler" podUID="5361f75e-f056-4911-be4c-82491629749e" podNamespace="kube-system" podName="coredns-76f75df574-56xn4" Sep 4 17:27:56.429153 kubelet[3195]: I0904 17:27:56.429011 3195 topology_manager.go:215] "Topology Admit Handler" podUID="3e32abad-5cb7-4593-ad6e-3b408e428271" podNamespace="calico-system" podName="calico-kube-controllers-58c545f596-pnc4h" Sep 4 17:27:56.435134 kubelet[3195]: I0904 17:27:56.434201 3195 topology_manager.go:215] "Topology Admit Handler" podUID="374ecc78-947f-4762-95e1-b7832d67c6f4" podNamespace="kube-system" podName="coredns-76f75df574-drqh6" Sep 4 17:27:56.445298 systemd[1]: Created slice kubepods-burstable-pod5361f75e_f056_4911_be4c_82491629749e.slice - libcontainer container kubepods-burstable-pod5361f75e_f056_4911_be4c_82491629749e.slice. 
Sep 4 17:27:56.449080 containerd[1981]: time="2024-09-04T17:27:56.448888577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:27:56.508254 kubelet[3195]: I0904 17:27:56.506037 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxppw\" (UniqueName: \"kubernetes.io/projected/5361f75e-f056-4911-be4c-82491629749e-kube-api-access-dxppw\") pod \"coredns-76f75df574-56xn4\" (UID: \"5361f75e-f056-4911-be4c-82491629749e\") " pod="kube-system/coredns-76f75df574-56xn4" Sep 4 17:27:56.508254 kubelet[3195]: I0904 17:27:56.506094 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5361f75e-f056-4911-be4c-82491629749e-config-volume\") pod \"coredns-76f75df574-56xn4\" (UID: \"5361f75e-f056-4911-be4c-82491629749e\") " pod="kube-system/coredns-76f75df574-56xn4" Sep 4 17:27:56.512602 systemd[1]: Created slice kubepods-besteffort-pod3e32abad_5cb7_4593_ad6e_3b408e428271.slice - libcontainer container kubepods-besteffort-pod3e32abad_5cb7_4593_ad6e_3b408e428271.slice. Sep 4 17:27:56.577729 systemd[1]: Created slice kubepods-burstable-pod374ecc78_947f_4762_95e1_b7832d67c6f4.slice - libcontainer container kubepods-burstable-pod374ecc78_947f_4762_95e1_b7832d67c6f4.slice. 
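The "Created slice kubepods-..." entries above show the kubelet's systemd cgroup naming for pods: the QoS class is lowercased and the dashes in the pod UID become underscores, since "-" is systemd's slice-hierarchy separator. A sketch of that mapping for the burstable and besteffort forms seen here (assumes the systemd cgroup driver; note that in the real kubelet, Guaranteed pods omit the QoS segment entirely):

```python
def pod_slice_name(qos_class, pod_uid):
    """Reconstruct the systemd slice name the kubelet creates for a
    Burstable or BestEffort pod, matching the 'Created slice' entries
    above: QoS class lowercased, UID dashes mapped to underscores."""
    return "kubepods-{0}-pod{1}.slice".format(
        qos_class.lower(), pod_uid.replace("-", "_"))
```

For example, the coredns pod with UID `5361f75e-f056-4911-be4c-82491629749e` lands in `kubepods-burstable-pod5361f75e_f056_4911_be4c_82491629749e.slice`, exactly as logged.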
Sep 4 17:27:56.606426 kubelet[3195]: I0904 17:27:56.606393 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374ecc78-947f-4762-95e1-b7832d67c6f4-config-volume\") pod \"coredns-76f75df574-drqh6\" (UID: \"374ecc78-947f-4762-95e1-b7832d67c6f4\") " pod="kube-system/coredns-76f75df574-drqh6"
Sep 4 17:27:56.608732 kubelet[3195]: I0904 17:27:56.608319 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjrnx\" (UniqueName: \"kubernetes.io/projected/374ecc78-947f-4762-95e1-b7832d67c6f4-kube-api-access-vjrnx\") pod \"coredns-76f75df574-drqh6\" (UID: \"374ecc78-947f-4762-95e1-b7832d67c6f4\") " pod="kube-system/coredns-76f75df574-drqh6"
Sep 4 17:27:56.608732 kubelet[3195]: I0904 17:27:56.608376 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e32abad-5cb7-4593-ad6e-3b408e428271-tigera-ca-bundle\") pod \"calico-kube-controllers-58c545f596-pnc4h\" (UID: \"3e32abad-5cb7-4593-ad6e-3b408e428271\") " pod="calico-system/calico-kube-controllers-58c545f596-pnc4h"
Sep 4 17:27:56.608732 kubelet[3195]: I0904 17:27:56.608432 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2t9\" (UniqueName: \"kubernetes.io/projected/3e32abad-5cb7-4593-ad6e-3b408e428271-kube-api-access-9c2t9\") pod \"calico-kube-controllers-58c545f596-pnc4h\" (UID: \"3e32abad-5cb7-4593-ad6e-3b408e428271\") " pod="calico-system/calico-kube-controllers-58c545f596-pnc4h"
Sep 4 17:27:56.780441 containerd[1981]: time="2024-09-04T17:27:56.780394137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56xn4,Uid:5361f75e-f056-4911-be4c-82491629749e,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:56.846002 containerd[1981]: time="2024-09-04T17:27:56.845538451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c545f596-pnc4h,Uid:3e32abad-5cb7-4593-ad6e-3b408e428271,Namespace:calico-system,Attempt:0,}"
Sep 4 17:27:56.918773 containerd[1981]: time="2024-09-04T17:27:56.918721083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drqh6,Uid:374ecc78-947f-4762-95e1-b7832d67c6f4,Namespace:kube-system,Attempt:0,}"
Sep 4 17:27:57.234878 systemd[1]: Created slice kubepods-besteffort-pod906685ae_b7d7_4862_82f6_b94651385380.slice - libcontainer container kubepods-besteffort-pod906685ae_b7d7_4862_82f6_b94651385380.slice.
Sep 4 17:27:57.242196 containerd[1981]: time="2024-09-04T17:27:57.242139046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnqrz,Uid:906685ae-b7d7-4862-82f6-b94651385380,Namespace:calico-system,Attempt:0,}"
Sep 4 17:27:57.407433 containerd[1981]: time="2024-09-04T17:27:57.405621719Z" level=error msg="Failed to destroy network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.412473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7-shm.mount: Deactivated successfully.
Sep 4 17:27:57.432211 containerd[1981]: time="2024-09-04T17:27:57.432153139Z" level=error msg="encountered an error cleaning up failed sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.432450 containerd[1981]: time="2024-09-04T17:27:57.432409423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drqh6,Uid:374ecc78-947f-4762-95e1-b7832d67c6f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.432545 containerd[1981]: time="2024-09-04T17:27:57.432513898Z" level=error msg="Failed to destroy network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.436386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7-shm.mount: Deactivated successfully.
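Every sandbox failure in this stretch of the journal has the same root cause reported by the plugin itself: it cannot stat /var/lib/calico/nodename, a file that calico/node writes only after it has started and mounted /var/lib/calico/. A minimal sketch of the check an operator could run on the node (the helper name and its path argument are assumptions for illustration; the default path is taken from the log's error text):

```shell
#!/bin/sh
# Sketch: report whether the nodename file the Calico CNI plugin stats
# is present. calico/node creates it at startup; until then every
# sandbox add/delete fails exactly as in the log above.
check_calico_nodename() {
    f="${1:-/var/lib/calico/nodename}"   # path from the log's error text
    if [ -f "$f" ]; then
        cat "$f"                         # prints the node name on success
    else
        echo "missing $f -- is the calico-node pod running?" >&2
        return 1
    fi
}
```

On this node the file would appear once the calico-node pod, started later in the log, is up and running.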
Sep 4 17:27:57.436708 containerd[1981]: time="2024-09-04T17:27:57.436660858Z" level=error msg="encountered an error cleaning up failed sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.436793 containerd[1981]: time="2024-09-04T17:27:57.436751944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c545f596-pnc4h,Uid:3e32abad-5cb7-4593-ad6e-3b408e428271,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.438490 kubelet[3195]: E0904 17:27:57.438447 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.439171 kubelet[3195]: E0904 17:27:57.438519 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c545f596-pnc4h"
Sep 4 17:27:57.439171 kubelet[3195]: E0904 17:27:57.438447 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.439171 kubelet[3195]: E0904 17:27:57.438551 3195 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c545f596-pnc4h"
Sep 4 17:27:57.439171 kubelet[3195]: E0904 17:27:57.438575 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-drqh6"
Sep 4 17:27:57.440110 kubelet[3195]: E0904 17:27:57.438599 3195 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-drqh6"
Sep 4 17:27:57.440110 kubelet[3195]: E0904 17:27:57.438621 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58c545f596-pnc4h_calico-system(3e32abad-5cb7-4593-ad6e-3b408e428271)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58c545f596-pnc4h_calico-system(3e32abad-5cb7-4593-ad6e-3b408e428271)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c545f596-pnc4h" podUID="3e32abad-5cb7-4593-ad6e-3b408e428271"
Sep 4 17:27:57.440110 kubelet[3195]: E0904 17:27:57.438644 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-drqh6_kube-system(374ecc78-947f-4762-95e1-b7832d67c6f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-drqh6_kube-system(374ecc78-947f-4762-95e1-b7832d67c6f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-drqh6" podUID="374ecc78-947f-4762-95e1-b7832d67c6f4"
Sep 4 17:27:57.450921 kubelet[3195]: I0904 17:27:57.450408 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7"
Sep 4 17:27:57.459520 containerd[1981]: time="2024-09-04T17:27:57.458709295Z" level=info msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\""
Sep 4 17:27:57.464218 kubelet[3195]: I0904 17:27:57.463314 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:27:57.469343 containerd[1981]: time="2024-09-04T17:27:57.467828799Z" level=info msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\""
Sep 4 17:27:57.469343 containerd[1981]: time="2024-09-04T17:27:57.469001898Z" level=info msg="Ensure that sandbox 24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7 in task-service has been cleanup successfully"
Sep 4 17:27:57.469903 containerd[1981]: time="2024-09-04T17:27:57.469875525Z" level=info msg="Ensure that sandbox 8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7 in task-service has been cleanup successfully"
Sep 4 17:27:57.470595 containerd[1981]: time="2024-09-04T17:27:57.470502216Z" level=error msg="Failed to destroy network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.471847 containerd[1981]: time="2024-09-04T17:27:57.471808088Z" level=error msg="encountered an error cleaning up failed sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.471925 containerd[1981]: time="2024-09-04T17:27:57.471876104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56xn4,Uid:5361f75e-f056-4911-be4c-82491629749e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.473481 kubelet[3195]: E0904 17:27:57.473453 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.473578 kubelet[3195]: E0904 17:27:57.473528 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56xn4"
Sep 4 17:27:57.473578 kubelet[3195]: E0904 17:27:57.473562 3195 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56xn4"
Sep 4 17:27:57.473668 kubelet[3195]: E0904 17:27:57.473632 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56xn4_kube-system(5361f75e-f056-4911-be4c-82491629749e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56xn4_kube-system(5361f75e-f056-4911-be4c-82491629749e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56xn4" podUID="5361f75e-f056-4911-be4c-82491629749e"
Sep 4 17:27:57.475602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75-shm.mount: Deactivated successfully.
Sep 4 17:27:57.558206 containerd[1981]: time="2024-09-04T17:27:57.558149613Z" level=error msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" failed" error="failed to destroy network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.558939 kubelet[3195]: E0904 17:27:57.558492 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:27:57.558939 kubelet[3195]: E0904 17:27:57.558580 3195 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"}
Sep 4 17:27:57.558939 kubelet[3195]: E0904 17:27:57.558630 3195 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e32abad-5cb7-4593-ad6e-3b408e428271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:27:57.558939 kubelet[3195]: E0904 17:27:57.558671 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e32abad-5cb7-4593-ad6e-3b408e428271\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c545f596-pnc4h" podUID="3e32abad-5cb7-4593-ad6e-3b408e428271"
Sep 4 17:27:57.563224 containerd[1981]: time="2024-09-04T17:27:57.563174101Z" level=error msg="Failed to destroy network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.567374 containerd[1981]: time="2024-09-04T17:27:57.567120386Z" level=error msg="encountered an error cleaning up failed sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.567408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc-shm.mount: Deactivated successfully.
Sep 4 17:27:57.569209 containerd[1981]: time="2024-09-04T17:27:57.567218742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnqrz,Uid:906685ae-b7d7-4862-82f6-b94651385380,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.569668 kubelet[3195]: E0904 17:27:57.569627 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.570010 kubelet[3195]: E0904 17:27:57.569867 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnqrz"
Sep 4 17:27:57.570010 kubelet[3195]: E0904 17:27:57.569904 3195 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnqrz"
Sep 4 17:27:57.570270 kubelet[3195]: E0904 17:27:57.569994 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rnqrz_calico-system(906685ae-b7d7-4862-82f6-b94651385380)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rnqrz_calico-system(906685ae-b7d7-4862-82f6-b94651385380)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380"
Sep 4 17:27:57.571799 containerd[1981]: time="2024-09-04T17:27:57.571757232Z" level=error msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" failed" error="failed to destroy network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:57.572246 kubelet[3195]: E0904 17:27:57.572040 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7"
Sep 4 17:27:57.572246 kubelet[3195]: E0904 17:27:57.572082 3195 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7"}
Sep 4 17:27:57.572246 kubelet[3195]: E0904 17:27:57.572149 3195 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"374ecc78-947f-4762-95e1-b7832d67c6f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:27:57.572246 kubelet[3195]: E0904 17:27:57.572186 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"374ecc78-947f-4762-95e1-b7832d67c6f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-drqh6" podUID="374ecc78-947f-4762-95e1-b7832d67c6f4"
Sep 4 17:27:58.467812 kubelet[3195]: I0904 17:27:58.467777 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc"
Sep 4 17:27:58.475125 kubelet[3195]: I0904 17:27:58.472765 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75"
Sep 4 17:27:58.475366 containerd[1981]: time="2024-09-04T17:27:58.474396666Z" level=info msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\""
Sep 4 17:27:58.475366 containerd[1981]: time="2024-09-04T17:27:58.474663926Z" level=info msg="Ensure that sandbox 7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75 in task-service has been cleanup successfully"
Sep 4 17:27:58.480938 containerd[1981]: time="2024-09-04T17:27:58.478588456Z" level=info msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\""
Sep 4 17:27:58.480938 containerd[1981]: time="2024-09-04T17:27:58.478998014Z" level=info msg="Ensure that sandbox 5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc in task-service has been cleanup successfully"
Sep 4 17:27:58.640660 containerd[1981]: time="2024-09-04T17:27:58.640505975Z" level=error msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" failed" error="failed to destroy network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:58.640919 kubelet[3195]: E0904 17:27:58.640849 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc"
Sep 4 17:27:58.640919 kubelet[3195]: E0904 17:27:58.640902 3195 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc"}
Sep 4 17:27:58.642900 kubelet[3195]: E0904 17:27:58.640954 3195 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"906685ae-b7d7-4862-82f6-b94651385380\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:27:58.642900 kubelet[3195]: E0904 17:27:58.641076 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"906685ae-b7d7-4862-82f6-b94651385380\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnqrz" podUID="906685ae-b7d7-4862-82f6-b94651385380"
Sep 4 17:27:58.647699 containerd[1981]: time="2024-09-04T17:27:58.647590245Z" level=error msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" failed" error="failed to destroy network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:27:58.648931 kubelet[3195]: E0904 17:27:58.648905 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75"
Sep 4 17:27:58.649039 kubelet[3195]: E0904 17:27:58.648958 3195 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75"}
Sep 4 17:27:58.649039 kubelet[3195]: E0904 17:27:58.649004 3195 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5361f75e-f056-4911-be4c-82491629749e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:27:58.649292 kubelet[3195]: E0904 17:27:58.649049 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5361f75e-f056-4911-be4c-82491629749e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56xn4" podUID="5361f75e-f056-4911-be4c-82491629749e"
Sep 4 17:28:04.696662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316183683.mount: Deactivated successfully.
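Four distinct pause-container sandbox IDs cycle through the add/delete failures above. A throwaway filter for pulling them out of journal text like this (the function name is an assumption for illustration; the 64-lowercase-hex-digit shape is the standard form of a containerd sandbox ID):

```shell
#!/bin/sh
# Sketch: list the unique containerd sandbox IDs mentioned on stdin.
# A sandbox ID is 64 lowercase hex digits, so a bare regex suffices.
extract_sandbox_ids() {
    grep -oE '[0-9a-f]{64}' | sort -u
}
```

Piping the journal through it (e.g. `journalctl -b | extract_sandbox_ids`) would surface each failing sandbox once, however many times it was retried.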
Sep 4 17:28:04.795934 containerd[1981]: time="2024-09-04T17:28:04.790205927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Sep 4 17:28:04.798348 containerd[1981]: time="2024-09-04T17:28:04.797626729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:28:04.814162 containerd[1981]: time="2024-09-04T17:28:04.814109054Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:28:04.816191 containerd[1981]: time="2024-09-04T17:28:04.816133257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:28:04.821758 containerd[1981]: time="2024-09-04T17:28:04.821694874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 8.366174233s"
Sep 4 17:28:04.821966 containerd[1981]: time="2024-09-04T17:28:04.821945193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Sep 4 17:28:04.945549 containerd[1981]: time="2024-09-04T17:28:04.945391085Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:28:05.068678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851705848.mount: Deactivated successfully.
Sep 4 17:28:05.125345 containerd[1981]: time="2024-09-04T17:28:05.125296993Z" level=info msg="CreateContainer within sandbox \"ae4dd82920ce1bdc76b755875afc611ee44e20bb79bfbef63052cafc20e55bf1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967\""
Sep 4 17:28:05.134694 containerd[1981]: time="2024-09-04T17:28:05.134486387Z" level=info msg="StartContainer for \"89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967\""
Sep 4 17:28:05.295825 systemd[1]: Started cri-containerd-89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967.scope - libcontainer container 89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967.
Sep 4 17:28:05.380515 containerd[1981]: time="2024-09-04T17:28:05.380035211Z" level=info msg="StartContainer for \"89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967\" returns successfully"
Sep 4 17:28:05.651694 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:28:05.654378 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 4 17:28:06.598118 systemd[1]: run-containerd-runc-k8s.io-89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967-runc.fdPlaH.mount: Deactivated successfully.
Sep 4 17:28:08.225269 containerd[1981]: time="2024-09-04T17:28:08.221534208Z" level=info msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\""
Sep 4 17:28:08.470867 kubelet[3195]: I0904 17:28:08.470618 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2v98z" podStartSLOduration=4.212149885 podStartE2EDuration="23.430870901s" podCreationTimestamp="2024-09-04 17:27:45 +0000 UTC" firstStartedPulling="2024-09-04 17:27:45.603618303 +0000 UTC m=+19.618031267" lastFinishedPulling="2024-09-04 17:28:04.822339326 +0000 UTC m=+38.836752283" observedRunningTime="2024-09-04 17:28:05.650408942 +0000 UTC m=+39.664821904" watchObservedRunningTime="2024-09-04 17:28:08.430870901 +0000 UTC m=+42.445283867"
Sep 4 17:28:08.572839 kernel: bpftool[4640]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.427 [INFO][4602] k8s.go 608: Cleaning up netns ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.428 [INFO][4602] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" iface="eth0" netns="/var/run/netns/cni-6ab066fd-8bc9-ac6a-d080-db4e8e572f5e"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.428 [INFO][4602] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" iface="eth0" netns="/var/run/netns/cni-6ab066fd-8bc9-ac6a-d080-db4e8e572f5e"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.434 [INFO][4602] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" iface="eth0" netns="/var/run/netns/cni-6ab066fd-8bc9-ac6a-d080-db4e8e572f5e"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.434 [INFO][4602] k8s.go 615: Releasing IP address(es) ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.434 [INFO][4602] utils.go 188: Calico CNI releasing IP address ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.745 [INFO][4627] ipam_plugin.go 417: Releasing address using handleID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.746 [INFO][4627] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.747 [INFO][4627] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.772 [WARNING][4627] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.772 [INFO][4627] ipam_plugin.go 445: Releasing address using workloadID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0"
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.774 [INFO][4627] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:28:08.785260 containerd[1981]: 2024-09-04 17:28:08.777 [INFO][4602] k8s.go 621: Teardown processing complete. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7"
Sep 4 17:28:08.785260 containerd[1981]: time="2024-09-04T17:28:08.782016092Z" level=info msg="TearDown network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" successfully"
Sep 4 17:28:08.785260 containerd[1981]: time="2024-09-04T17:28:08.782063480Z" level=info msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" returns successfully"
Sep 4 17:28:08.785260 containerd[1981]: time="2024-09-04T17:28:08.784907253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c545f596-pnc4h,Uid:3e32abad-5cb7-4593-ad6e-3b408e428271,Namespace:calico-system,Attempt:1,}"
Sep 4 17:28:08.789837 systemd[1]: run-netns-cni\x2d6ab066fd\x2d8bc9\x2dac6a\x2dd080\x2ddb4e8e572f5e.mount: Deactivated successfully.
Sep 4 17:28:09.218552 systemd-networkd[1810]: vxlan.calico: Link UP Sep 4 17:28:09.218568 systemd-networkd[1810]: vxlan.calico: Gained carrier Sep 4 17:28:09.222550 (udev-worker)[4680]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:28:09.250213 systemd-networkd[1810]: cali9fc37347d16: Link UP Sep 4 17:28:09.254337 systemd-networkd[1810]: cali9fc37347d16: Gained carrier Sep 4 17:28:09.256904 (udev-worker)[4688]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:08.913 [INFO][4646] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0 calico-kube-controllers-58c545f596- calico-system 3e32abad-5cb7-4593-ad6e-3b408e428271 684 0 2024-09-04 17:27:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58c545f596 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-103 calico-kube-controllers-58c545f596-pnc4h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9fc37347d16 [] []}} ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:08.914 [INFO][4646] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.070 
[INFO][4656] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" HandleID="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.088 [INFO][4656] ipam_plugin.go 270: Auto assigning IP ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" HandleID="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a6900), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-103", "pod":"calico-kube-controllers-58c545f596-pnc4h", "timestamp":"2024-09-04 17:28:09.070593567 +0000 UTC"}, Hostname:"ip-172-31-30-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.088 [INFO][4656] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.088 [INFO][4656] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.088 [INFO][4656] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-103' Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.091 [INFO][4656] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.112 [INFO][4656] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.126 [INFO][4656] ipam.go 489: Trying affinity for 192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.135 [INFO][4656] ipam.go 155: Attempting to load block cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.148 [INFO][4656] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.148 [INFO][4656] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.150 [INFO][4656] ipam.go 1685: Creating new handle: k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626 Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.175 [INFO][4656] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.224 [INFO][4656] ipam.go 1216: Successfully claimed IPs: [192.168.10.193/26] block=192.168.10.192/26 
handle="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.224 [INFO][4656] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.193/26] handle="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" host="ip-172-31-30-103" Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.225 [INFO][4656] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:09.294540 containerd[1981]: 2024-09-04 17:28:09.225 [INFO][4656] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.193/26] IPv6=[] ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" HandleID="k8s-pod-network.2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.296528 containerd[1981]: 2024-09-04 17:28:09.233 [INFO][4646] k8s.go 386: Populated endpoint ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0", GenerateName:"calico-kube-controllers-58c545f596-", Namespace:"calico-system", SelfLink:"", UID:"3e32abad-5cb7-4593-ad6e-3b408e428271", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c545f596", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"", Pod:"calico-kube-controllers-58c545f596-pnc4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc37347d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:09.296528 containerd[1981]: 2024-09-04 17:28:09.233 [INFO][4646] k8s.go 387: Calico CNI using IPs: [192.168.10.193/32] ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.296528 containerd[1981]: 2024-09-04 17:28:09.234 [INFO][4646] dataplane_linux.go 68: Setting the host side veth name to cali9fc37347d16 ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.296528 containerd[1981]: 2024-09-04 17:28:09.249 [INFO][4646] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.296528 
containerd[1981]: 2024-09-04 17:28:09.250 [INFO][4646] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0", GenerateName:"calico-kube-controllers-58c545f596-", Namespace:"calico-system", SelfLink:"", UID:"3e32abad-5cb7-4593-ad6e-3b408e428271", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c545f596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626", Pod:"calico-kube-controllers-58c545f596-pnc4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc37347d16", MAC:"1e:ac:07:ea:bb:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:09.296528 containerd[1981]: 
2024-09-04 17:28:09.289 [INFO][4646] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626" Namespace="calico-system" Pod="calico-kube-controllers-58c545f596-pnc4h" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:09.412909 containerd[1981]: time="2024-09-04T17:28:09.412161527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:09.413409 containerd[1981]: time="2024-09-04T17:28:09.413091894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:09.413409 containerd[1981]: time="2024-09-04T17:28:09.413252897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:09.413714 containerd[1981]: time="2024-09-04T17:28:09.413367926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:09.454740 systemd[1]: Started cri-containerd-2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626.scope - libcontainer container 2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626. 
Sep 4 17:28:09.523364 containerd[1981]: time="2024-09-04T17:28:09.523301107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c545f596-pnc4h,Uid:3e32abad-5cb7-4593-ad6e-3b408e428271,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626\"" Sep 4 17:28:09.526898 containerd[1981]: time="2024-09-04T17:28:09.526662645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:28:10.664605 systemd-networkd[1810]: vxlan.calico: Gained IPv6LL Sep 4 17:28:10.853496 systemd-networkd[1810]: cali9fc37347d16: Gained IPv6LL Sep 4 17:28:11.223307 containerd[1981]: time="2024-09-04T17:28:11.222591764Z" level=info msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\"" Sep 4 17:28:11.223862 containerd[1981]: time="2024-09-04T17:28:11.223558758Z" level=info msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\"" Sep 4 17:28:11.570684 systemd[1]: Started sshd@7-172.31.30.103:22-139.178.68.195:45222.service - OpenSSH per-connection server daemon (139.178.68.195:45222). Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.474 [INFO][4821] k8s.go 608: Cleaning up netns ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.474 [INFO][4821] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" iface="eth0" netns="/var/run/netns/cni-116cfd86-d40e-90d6-0acb-53e9bf44f31c" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.477 [INFO][4821] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" iface="eth0" netns="/var/run/netns/cni-116cfd86-d40e-90d6-0acb-53e9bf44f31c" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.477 [INFO][4821] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" iface="eth0" netns="/var/run/netns/cni-116cfd86-d40e-90d6-0acb-53e9bf44f31c" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.477 [INFO][4821] k8s.go 615: Releasing IP address(es) ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.477 [INFO][4821] utils.go 188: Calico CNI releasing IP address ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.707 [INFO][4833] ipam_plugin.go 417: Releasing address using handleID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.707 [INFO][4833] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.707 [INFO][4833] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.734 [WARNING][4833] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.734 [INFO][4833] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.737 [INFO][4833] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:11.769379 containerd[1981]: 2024-09-04 17:28:11.762 [INFO][4821] k8s.go 621: Teardown processing complete. ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:11.777042 containerd[1981]: time="2024-09-04T17:28:11.773205661Z" level=info msg="TearDown network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" successfully" Sep 4 17:28:11.777042 containerd[1981]: time="2024-09-04T17:28:11.773306409Z" level=info msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" returns successfully" Sep 4 17:28:11.777770 systemd[1]: run-netns-cni\x2d116cfd86\x2dd40e\x2d90d6\x2d0acb\x2d53e9bf44f31c.mount: Deactivated successfully. 
Sep 4 17:28:11.784938 containerd[1981]: time="2024-09-04T17:28:11.784815349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnqrz,Uid:906685ae-b7d7-4862-82f6-b94651385380,Namespace:calico-system,Attempt:1,}" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.509 [INFO][4822] k8s.go 608: Cleaning up netns ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.512 [INFO][4822] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" iface="eth0" netns="/var/run/netns/cni-f8e151f5-a479-5d25-68fb-afaaf7df85e7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.513 [INFO][4822] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" iface="eth0" netns="/var/run/netns/cni-f8e151f5-a479-5d25-68fb-afaaf7df85e7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.514 [INFO][4822] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" iface="eth0" netns="/var/run/netns/cni-f8e151f5-a479-5d25-68fb-afaaf7df85e7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.514 [INFO][4822] k8s.go 615: Releasing IP address(es) ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.516 [INFO][4822] utils.go 188: Calico CNI releasing IP address ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.714 [INFO][4838] ipam_plugin.go 417: Releasing address using handleID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.715 [INFO][4838] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.737 [INFO][4838] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.766 [WARNING][4838] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.766 [INFO][4838] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.774 [INFO][4838] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:11.799521 containerd[1981]: 2024-09-04 17:28:11.790 [INFO][4822] k8s.go 621: Teardown processing complete. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:11.799521 containerd[1981]: time="2024-09-04T17:28:11.797809536Z" level=info msg="TearDown network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" successfully" Sep 4 17:28:11.799521 containerd[1981]: time="2024-09-04T17:28:11.797845283Z" level=info msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" returns successfully" Sep 4 17:28:11.804441 systemd[1]: run-netns-cni\x2df8e151f5\x2da479\x2d5d25\x2d68fb\x2dafaaf7df85e7.mount: Deactivated successfully. 
Sep 4 17:28:11.806917 containerd[1981]: time="2024-09-04T17:28:11.805168405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drqh6,Uid:374ecc78-947f-4762-95e1-b7832d67c6f4,Namespace:kube-system,Attempt:1,}" Sep 4 17:28:11.921384 sshd[4843]: Accepted publickey for core from 139.178.68.195 port 45222 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:11.926087 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:11.940615 systemd-logind[1947]: New session 8 of user core. Sep 4 17:28:11.948216 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:28:12.298868 systemd-networkd[1810]: cali270d2f73b80: Link UP Sep 4 17:28:12.322016 systemd-networkd[1810]: cali270d2f73b80: Gained carrier Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.002 [INFO][4860] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0 coredns-76f75df574- kube-system 374ecc78-947f-4762-95e1-b7832d67c6f4 726 0 2024-09-04 17:27:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-103 coredns-76f75df574-drqh6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali270d2f73b80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.004 [INFO][4860] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" 
WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.130 [INFO][4875] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" HandleID="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.149 [INFO][4875] ipam_plugin.go 270: Auto assigning IP ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" HandleID="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000185510), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-103", "pod":"coredns-76f75df574-drqh6", "timestamp":"2024-09-04 17:28:12.130122115 +0000 UTC"}, Hostname:"ip-172-31-30-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.149 [INFO][4875] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.149 [INFO][4875] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.149 [INFO][4875] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-103' Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.154 [INFO][4875] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.168 [INFO][4875] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.186 [INFO][4875] ipam.go 489: Trying affinity for 192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.197 [INFO][4875] ipam.go 155: Attempting to load block cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.207 [INFO][4875] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.207 [INFO][4875] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.210 [INFO][4875] ipam.go 1685: Creating new handle: k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37 Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.219 [INFO][4875] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.250 [INFO][4875] ipam.go 1216: Successfully claimed IPs: [192.168.10.194/26] block=192.168.10.192/26 
handle="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.251 [INFO][4875] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.194/26] handle="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" host="ip-172-31-30-103" Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.252 [INFO][4875] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:12.365661 containerd[1981]: 2024-09-04 17:28:12.252 [INFO][4875] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.194/26] IPv6=[] ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" HandleID="k8s-pod-network.f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.274 [INFO][4860] k8s.go 386: Populated endpoint ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"374ecc78-947f-4762-95e1-b7832d67c6f4", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"", Pod:"coredns-76f75df574-drqh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali270d2f73b80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.275 [INFO][4860] k8s.go 387: Calico CNI using IPs: [192.168.10.194/32] ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.275 [INFO][4860] dataplane_linux.go 68: Setting the host side veth name to cali270d2f73b80 ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.298 [INFO][4860] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" 
WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.301 [INFO][4860] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"374ecc78-947f-4762-95e1-b7832d67c6f4", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37", Pod:"coredns-76f75df574-drqh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali270d2f73b80", MAC:"fe:3c:1c:47:96:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:12.368916 containerd[1981]: 2024-09-04 17:28:12.351 [INFO][4860] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37" Namespace="kube-system" Pod="coredns-76f75df574-drqh6" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:12.572644 systemd-networkd[1810]: caliece3c4198aa: Link UP Sep 4 17:28:12.579508 systemd-networkd[1810]: caliece3c4198aa: Gained carrier Sep 4 17:28:12.590673 containerd[1981]: time="2024-09-04T17:28:12.588586594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:12.590673 containerd[1981]: time="2024-09-04T17:28:12.588657807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:12.590673 containerd[1981]: time="2024-09-04T17:28:12.588699585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:12.590673 containerd[1981]: time="2024-09-04T17:28:12.588721540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.025 [INFO][4850] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0 csi-node-driver- calico-system 906685ae-b7d7-4862-82f6-b94651385380 724 0 2024-09-04 17:27:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-103 csi-node-driver-rnqrz eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliece3c4198aa [] []}} ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.026 [INFO][4850] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.230 [INFO][4880] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" HandleID="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.295 [INFO][4880] ipam_plugin.go 270: Auto assigning IP ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" 
HandleID="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-103", "pod":"csi-node-driver-rnqrz", "timestamp":"2024-09-04 17:28:12.230307004 +0000 UTC"}, Hostname:"ip-172-31-30-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.295 [INFO][4880] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.295 [INFO][4880] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.295 [INFO][4880] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-103' Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.312 [INFO][4880] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.378 [INFO][4880] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.417 [INFO][4880] ipam.go 489: Trying affinity for 192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.432 [INFO][4880] ipam.go 155: Attempting to load block cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.441 [INFO][4880] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 
17:28:12.444 [INFO][4880] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.454 [INFO][4880] ipam.go 1685: Creating new handle: k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310 Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.480 [INFO][4880] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.516 [INFO][4880] ipam.go 1216: Successfully claimed IPs: [192.168.10.195/26] block=192.168.10.192/26 handle="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.516 [INFO][4880] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.195/26] handle="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" host="ip-172-31-30-103" Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.516 [INFO][4880] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:28:12.662085 containerd[1981]: 2024-09-04 17:28:12.516 [INFO][4880] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.195/26] IPv6=[] ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" HandleID="k8s-pod-network.edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.536 [INFO][4850] k8s.go 386: Populated endpoint ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"906685ae-b7d7-4862-82f6-b94651385380", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"", Pod:"csi-node-driver-rnqrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliece3c4198aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.536 [INFO][4850] k8s.go 387: Calico CNI using IPs: [192.168.10.195/32] ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.536 [INFO][4850] dataplane_linux.go 68: Setting the host side veth name to caliece3c4198aa ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.601 [INFO][4850] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.608 [INFO][4850] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"906685ae-b7d7-4862-82f6-b94651385380", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310", Pod:"csi-node-driver-rnqrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliece3c4198aa", MAC:"56:cc:5a:7a:5b:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:12.663897 containerd[1981]: 2024-09-04 17:28:12.649 [INFO][4850] k8s.go 500: Wrote updated endpoint to datastore ContainerID="edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310" Namespace="calico-system" Pod="csi-node-driver-rnqrz" WorkloadEndpoint="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:12.697017 systemd[1]: Started cri-containerd-f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37.scope - libcontainer container f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37. Sep 4 17:28:12.725718 sshd[4843]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:12.736592 systemd[1]: sshd@7-172.31.30.103:22-139.178.68.195:45222.service: Deactivated successfully. Sep 4 17:28:12.743693 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 4 17:28:12.750334 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:28:12.753807 systemd-logind[1947]: Removed session 8. Sep 4 17:28:12.773808 containerd[1981]: time="2024-09-04T17:28:12.769102963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:12.773808 containerd[1981]: time="2024-09-04T17:28:12.769178041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:12.773808 containerd[1981]: time="2024-09-04T17:28:12.769208521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:12.774580 containerd[1981]: time="2024-09-04T17:28:12.769811469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:12.850159 systemd[1]: Started cri-containerd-edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310.scope - libcontainer container edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310. Sep 4 17:28:12.855419 containerd[1981]: time="2024-09-04T17:28:12.855366213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drqh6,Uid:374ecc78-947f-4762-95e1-b7832d67c6f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37\"" Sep 4 17:28:12.871577 containerd[1981]: time="2024-09-04T17:28:12.871481585Z" level=info msg="CreateContainer within sandbox \"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:28:12.934954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214225997.mount: Deactivated successfully. 
Sep 4 17:28:12.939977 containerd[1981]: time="2024-09-04T17:28:12.939901042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnqrz,Uid:906685ae-b7d7-4862-82f6-b94651385380,Namespace:calico-system,Attempt:1,} returns sandbox id \"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310\"" Sep 4 17:28:12.943556 containerd[1981]: time="2024-09-04T17:28:12.943451163Z" level=info msg="CreateContainer within sandbox \"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c8911a31ae225c14aac6b4dce5cf2766d49e23e038fedc787d153f24c40ecda\"" Sep 4 17:28:12.945047 containerd[1981]: time="2024-09-04T17:28:12.944582097Z" level=info msg="StartContainer for \"5c8911a31ae225c14aac6b4dce5cf2766d49e23e038fedc787d153f24c40ecda\"" Sep 4 17:28:12.992484 systemd[1]: Started cri-containerd-5c8911a31ae225c14aac6b4dce5cf2766d49e23e038fedc787d153f24c40ecda.scope - libcontainer container 5c8911a31ae225c14aac6b4dce5cf2766d49e23e038fedc787d153f24c40ecda. Sep 4 17:28:13.050947 containerd[1981]: time="2024-09-04T17:28:13.050832053Z" level=info msg="StartContainer for \"5c8911a31ae225c14aac6b4dce5cf2766d49e23e038fedc787d153f24c40ecda\" returns successfully" Sep 4 17:28:13.219995 containerd[1981]: time="2024-09-04T17:28:13.217808439Z" level=info msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\"" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.342 [INFO][5055] k8s.go 608: Cleaning up netns ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.343 [INFO][5055] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" iface="eth0" netns="/var/run/netns/cni-d8d478f9-4307-74cc-d05c-4fd566ddf1a8" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.344 [INFO][5055] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" iface="eth0" netns="/var/run/netns/cni-d8d478f9-4307-74cc-d05c-4fd566ddf1a8" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.344 [INFO][5055] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" iface="eth0" netns="/var/run/netns/cni-d8d478f9-4307-74cc-d05c-4fd566ddf1a8" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.344 [INFO][5055] k8s.go 615: Releasing IP address(es) ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.345 [INFO][5055] utils.go 188: Calico CNI releasing IP address ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.400 [INFO][5062] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.400 [INFO][5062] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.400 [INFO][5062] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.408 [WARNING][5062] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.422 [INFO][5062] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.425 [INFO][5062] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:13.429863 containerd[1981]: 2024-09-04 17:28:13.427 [INFO][5055] k8s.go 621: Teardown processing complete. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:13.441520 containerd[1981]: time="2024-09-04T17:28:13.430032947Z" level=info msg="TearDown network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" successfully" Sep 4 17:28:13.441520 containerd[1981]: time="2024-09-04T17:28:13.430064951Z" level=info msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" returns successfully" Sep 4 17:28:13.441520 containerd[1981]: time="2024-09-04T17:28:13.431068215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56xn4,Uid:5361f75e-f056-4911-be4c-82491629749e,Namespace:kube-system,Attempt:1,}" Sep 4 17:28:13.483938 containerd[1981]: time="2024-09-04T17:28:13.483799542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:13.503891 containerd[1981]: time="2024-09-04T17:28:13.503815313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active 
requests=0, bytes read=33507125" Sep 4 17:28:13.514128 containerd[1981]: time="2024-09-04T17:28:13.514069589Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:13.542123 containerd[1981]: time="2024-09-04T17:28:13.542058408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:13.546626 containerd[1981]: time="2024-09-04T17:28:13.546575853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 4.019869074s" Sep 4 17:28:13.546626 containerd[1981]: time="2024-09-04T17:28:13.546624221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:28:13.556336 containerd[1981]: time="2024-09-04T17:28:13.553487879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:28:13.578909 containerd[1981]: time="2024-09-04T17:28:13.578865335Z" level=info msg="CreateContainer within sandbox \"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:28:13.619350 containerd[1981]: time="2024-09-04T17:28:13.617625534Z" level=info msg="CreateContainer within sandbox \"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290\"" Sep 4 17:28:13.619496 containerd[1981]: time="2024-09-04T17:28:13.619360062Z" level=info msg="StartContainer for \"e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290\"" Sep 4 17:28:13.672384 systemd-networkd[1810]: caliece3c4198aa: Gained IPv6LL Sep 4 17:28:13.682439 systemd[1]: Started cri-containerd-e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290.scope - libcontainer container e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290. Sep 4 17:28:13.797265 systemd[1]: run-netns-cni\x2dd8d478f9\x2d4307\x2d74cc\x2dd05c\x2d4fd566ddf1a8.mount: Deactivated successfully. Sep 4 17:28:13.860139 kubelet[3195]: I0904 17:28:13.860098 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-drqh6" podStartSLOduration=35.860043341 podStartE2EDuration="35.860043341s" podCreationTimestamp="2024-09-04 17:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:28:13.85976959 +0000 UTC m=+47.874182556" watchObservedRunningTime="2024-09-04 17:28:13.860043341 +0000 UTC m=+47.874456308" Sep 4 17:28:13.870452 systemd-networkd[1810]: cali66735e6ebf0: Link UP Sep 4 17:28:13.872330 systemd-networkd[1810]: cali66735e6ebf0: Gained carrier Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.655 [INFO][5070] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0 coredns-76f75df574- kube-system 5361f75e-f056-4911-be4c-82491629749e 748 0 2024-09-04 17:27:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-103 coredns-76f75df574-56xn4 eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali66735e6ebf0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.656 [INFO][5070] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.735 [INFO][5102] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" HandleID="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.766 [INFO][5102] ipam_plugin.go 270: Auto assigning IP ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" HandleID="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034a170), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-103", "pod":"coredns-76f75df574-56xn4", "timestamp":"2024-09-04 17:28:13.735182708 +0000 UTC"}, Hostname:"ip-172-31-30-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.767 [INFO][5102] ipam_plugin.go 358: About to acquire host-wide 
IPAM lock. Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.767 [INFO][5102] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.767 [INFO][5102] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-103' Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.774 [INFO][5102] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.793 [INFO][5102] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.802 [INFO][5102] ipam.go 489: Trying affinity for 192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.805 [INFO][5102] ipam.go 155: Attempting to load block cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.812 [INFO][5102] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.812 [INFO][5102] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.815 [INFO][5102] ipam.go 1685: Creating new handle: k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.830 [INFO][5102] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.850 [INFO][5102] 
ipam.go 1216: Successfully claimed IPs: [192.168.10.196/26] block=192.168.10.192/26 handle="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.850 [INFO][5102] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.196/26] handle="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" host="ip-172-31-30-103" Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.850 [INFO][5102] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:13.907312 containerd[1981]: 2024-09-04 17:28:13.850 [INFO][5102] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.196/26] IPv6=[] ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" HandleID="k8s-pod-network.705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.861 [INFO][5070] k8s.go 386: Populated endpoint ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5361f75e-f056-4911-be4c-82491629749e", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"", Pod:"coredns-76f75df574-56xn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66735e6ebf0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.863 [INFO][5070] k8s.go 387: Calico CNI using IPs: [192.168.10.196/32] ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.864 [INFO][5070] dataplane_linux.go 68: Setting the host side veth name to cali66735e6ebf0 ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.871 [INFO][5070] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" 
Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.872 [INFO][5070] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5361f75e-f056-4911-be4c-82491629749e", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea", Pod:"coredns-76f75df574-56xn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66735e6ebf0", MAC:"ae:6b:28:98:f5:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:13.908333 containerd[1981]: 2024-09-04 17:28:13.902 [INFO][5070] k8s.go 500: Wrote updated endpoint to datastore ContainerID="705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea" Namespace="kube-system" Pod="coredns-76f75df574-56xn4" WorkloadEndpoint="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:13.924696 containerd[1981]: time="2024-09-04T17:28:13.922781435Z" level=info msg="StartContainer for \"e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290\" returns successfully" Sep 4 17:28:13.925609 systemd-networkd[1810]: cali270d2f73b80: Gained IPv6LL Sep 4 17:28:14.004402 containerd[1981]: time="2024-09-04T17:28:14.004300603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:28:14.004552 containerd[1981]: time="2024-09-04T17:28:14.004501381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:14.004610 containerd[1981]: time="2024-09-04T17:28:14.004547674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:28:14.004610 containerd[1981]: time="2024-09-04T17:28:14.004577212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:28:14.049319 systemd[1]: run-containerd-runc-k8s.io-705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea-runc.YwiYqW.mount: Deactivated successfully. 
Sep 4 17:28:14.058464 systemd[1]: Started cri-containerd-705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea.scope - libcontainer container 705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea. Sep 4 17:28:14.146321 containerd[1981]: time="2024-09-04T17:28:14.146273531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56xn4,Uid:5361f75e-f056-4911-be4c-82491629749e,Namespace:kube-system,Attempt:1,} returns sandbox id \"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea\"" Sep 4 17:28:14.154800 containerd[1981]: time="2024-09-04T17:28:14.154600906Z" level=info msg="CreateContainer within sandbox \"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:28:14.189253 containerd[1981]: time="2024-09-04T17:28:14.188975362Z" level=info msg="CreateContainer within sandbox \"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22b2ad1bcb9f2774730f1e125bf115d5d0ee5f55a206681b61c57200a87c7cc5\"" Sep 4 17:28:14.191474 containerd[1981]: time="2024-09-04T17:28:14.191440400Z" level=info msg="StartContainer for \"22b2ad1bcb9f2774730f1e125bf115d5d0ee5f55a206681b61c57200a87c7cc5\"" Sep 4 17:28:14.250524 systemd[1]: Started cri-containerd-22b2ad1bcb9f2774730f1e125bf115d5d0ee5f55a206681b61c57200a87c7cc5.scope - libcontainer container 22b2ad1bcb9f2774730f1e125bf115d5d0ee5f55a206681b61c57200a87c7cc5. Sep 4 17:28:14.338996 containerd[1981]: time="2024-09-04T17:28:14.338717875Z" level=info msg="StartContainer for \"22b2ad1bcb9f2774730f1e125bf115d5d0ee5f55a206681b61c57200a87c7cc5\" returns successfully" Sep 4 17:28:14.781790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820932372.mount: Deactivated successfully. 
Sep 4 17:28:14.898989 kubelet[3195]: I0904 17:28:14.898497 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-56xn4" podStartSLOduration=36.898446561 podStartE2EDuration="36.898446561s" podCreationTimestamp="2024-09-04 17:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:28:14.832330644 +0000 UTC m=+48.846743612" watchObservedRunningTime="2024-09-04 17:28:14.898446561 +0000 UTC m=+48.912859529" Sep 4 17:28:14.898989 kubelet[3195]: I0904 17:28:14.898630 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58c545f596-pnc4h" podStartSLOduration=25.876037522 podStartE2EDuration="29.898594336s" podCreationTimestamp="2024-09-04 17:27:45 +0000 UTC" firstStartedPulling="2024-09-04 17:28:09.526072568 +0000 UTC m=+43.540485525" lastFinishedPulling="2024-09-04 17:28:13.548629394 +0000 UTC m=+47.563042339" observedRunningTime="2024-09-04 17:28:14.896567069 +0000 UTC m=+48.910980036" watchObservedRunningTime="2024-09-04 17:28:14.898594336 +0000 UTC m=+48.913007300" Sep 4 17:28:14.921479 systemd[1]: run-containerd-runc-k8s.io-e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290-runc.xauXCJ.mount: Deactivated successfully. 
Sep 4 17:28:15.146868 systemd-networkd[1810]: cali66735e6ebf0: Gained IPv6LL Sep 4 17:28:15.348345 containerd[1981]: time="2024-09-04T17:28:15.348285192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:15.350164 containerd[1981]: time="2024-09-04T17:28:15.350021240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:28:15.352119 containerd[1981]: time="2024-09-04T17:28:15.352056363Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:15.356277 containerd[1981]: time="2024-09-04T17:28:15.356030458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:15.357143 containerd[1981]: time="2024-09-04T17:28:15.357102176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.803565355s" Sep 4 17:28:15.357291 containerd[1981]: time="2024-09-04T17:28:15.357178995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:28:15.360423 containerd[1981]: time="2024-09-04T17:28:15.360191623Z" level=info msg="CreateContainer within sandbox \"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:28:15.397903 containerd[1981]: 
time="2024-09-04T17:28:15.397838369Z" level=info msg="CreateContainer within sandbox \"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8224fb822baa29150fa7d09bc284bd177260e3f448eb74046cea6032b906983e\"" Sep 4 17:28:15.398660 containerd[1981]: time="2024-09-04T17:28:15.398599412Z" level=info msg="StartContainer for \"8224fb822baa29150fa7d09bc284bd177260e3f448eb74046cea6032b906983e\"" Sep 4 17:28:15.449517 systemd[1]: Started cri-containerd-8224fb822baa29150fa7d09bc284bd177260e3f448eb74046cea6032b906983e.scope - libcontainer container 8224fb822baa29150fa7d09bc284bd177260e3f448eb74046cea6032b906983e. Sep 4 17:28:15.495673 containerd[1981]: time="2024-09-04T17:28:15.495624985Z" level=info msg="StartContainer for \"8224fb822baa29150fa7d09bc284bd177260e3f448eb74046cea6032b906983e\" returns successfully" Sep 4 17:28:15.497063 containerd[1981]: time="2024-09-04T17:28:15.497031231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:28:17.360206 containerd[1981]: time="2024-09-04T17:28:17.359702373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:17.362723 containerd[1981]: time="2024-09-04T17:28:17.362672104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:28:17.366259 containerd[1981]: time="2024-09-04T17:28:17.364986108Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:17.369620 containerd[1981]: time="2024-09-04T17:28:17.369583691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:28:17.372082 containerd[1981]: time="2024-09-04T17:28:17.372040825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.874966171s" Sep 4 17:28:17.372595 containerd[1981]: time="2024-09-04T17:28:17.372569035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:28:17.378110 containerd[1981]: time="2024-09-04T17:28:17.377622618Z" level=info msg="CreateContainer within sandbox \"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:28:17.406663 containerd[1981]: time="2024-09-04T17:28:17.406611423Z" level=info msg="CreateContainer within sandbox \"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c\"" Sep 4 17:28:17.407595 containerd[1981]: time="2024-09-04T17:28:17.407562902Z" level=info msg="StartContainer for \"185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c\"" Sep 4 17:28:17.511860 systemd[1]: run-containerd-runc-k8s.io-185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c-runc.eNivx9.mount: Deactivated successfully. 
Sep 4 17:28:17.521746 systemd[1]: Started cri-containerd-185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c.scope - libcontainer container 185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c. Sep 4 17:28:17.586728 containerd[1981]: time="2024-09-04T17:28:17.586555141Z" level=info msg="StartContainer for \"185e4a9fa74c0ac9a0910c36de6996a5759b9dd2657c8549beb14a0a08e4844c\" returns successfully" Sep 4 17:28:17.769723 systemd[1]: Started sshd@8-172.31.30.103:22-139.178.68.195:43190.service - OpenSSH per-connection server daemon (139.178.68.195:43190). Sep 4 17:28:17.911171 kubelet[3195]: I0904 17:28:17.911117 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rnqrz" podStartSLOduration=28.481992605 podStartE2EDuration="32.911058206s" podCreationTimestamp="2024-09-04 17:27:45 +0000 UTC" firstStartedPulling="2024-09-04 17:28:12.94432807 +0000 UTC m=+46.958741023" lastFinishedPulling="2024-09-04 17:28:17.373393678 +0000 UTC m=+51.387806624" observedRunningTime="2024-09-04 17:28:17.910645279 +0000 UTC m=+51.925058268" watchObservedRunningTime="2024-09-04 17:28:17.911058206 +0000 UTC m=+51.925471170" Sep 4 17:28:18.019667 ntpd[1942]: Listen normally on 7 vxlan.calico 192.168.10.192:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally on 7 vxlan.calico 192.168.10.192:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally on 8 vxlan.calico [fe80::648a:beff:fe1b:84a5%4]:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally on 9 cali9fc37347d16 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally on 10 cali270d2f73b80 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally on 11 caliece3c4198aa [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:28:18.020913 ntpd[1942]: 4 Sep 17:28:18 ntpd[1942]: Listen normally 
on 12 cali66735e6ebf0 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:28:18.020464 ntpd[1942]: Listen normally on 8 vxlan.calico [fe80::648a:beff:fe1b:84a5%4]:123 Sep 4 17:28:18.020531 ntpd[1942]: Listen normally on 9 cali9fc37347d16 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 4 17:28:18.020583 ntpd[1942]: Listen normally on 10 cali270d2f73b80 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:28:18.020623 ntpd[1942]: Listen normally on 11 caliece3c4198aa [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:28:18.020674 ntpd[1942]: Listen normally on 12 cali66735e6ebf0 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:28:18.039932 sshd[5334]: Accepted publickey for core from 139.178.68.195 port 43190 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:18.056401 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:18.069057 systemd-logind[1947]: New session 9 of user core. Sep 4 17:28:18.077716 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:28:18.630133 kubelet[3195]: I0904 17:28:18.629930 3195 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:28:18.638480 kubelet[3195]: I0904 17:28:18.638396 3195 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:28:18.942156 sshd[5334]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:18.957531 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:28:18.964901 systemd[1]: sshd@8-172.31.30.103:22-139.178.68.195:43190.service: Deactivated successfully. Sep 4 17:28:18.972746 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:28:18.985739 systemd-logind[1947]: Removed session 9. 
Sep 4 17:28:23.983872 systemd[1]: Started sshd@9-172.31.30.103:22-139.178.68.195:43204.service - OpenSSH per-connection server daemon (139.178.68.195:43204). Sep 4 17:28:24.200089 sshd[5381]: Accepted publickey for core from 139.178.68.195 port 43204 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:24.202402 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:24.208304 systemd-logind[1947]: New session 10 of user core. Sep 4 17:28:24.215481 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:28:24.478963 sshd[5381]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:24.483657 systemd[1]: sshd@9-172.31.30.103:22-139.178.68.195:43204.service: Deactivated successfully. Sep 4 17:28:24.486798 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:28:24.488600 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:28:24.490796 systemd-logind[1947]: Removed session 10. Sep 4 17:28:24.512655 systemd[1]: Started sshd@10-172.31.30.103:22-139.178.68.195:43212.service - OpenSSH per-connection server daemon (139.178.68.195:43212). Sep 4 17:28:24.695218 sshd[5397]: Accepted publickey for core from 139.178.68.195 port 43212 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:24.697015 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:24.704645 systemd-logind[1947]: New session 11 of user core. Sep 4 17:28:24.709430 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:28:25.071569 sshd[5397]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:25.085838 systemd[1]: sshd@10-172.31.30.103:22-139.178.68.195:43212.service: Deactivated successfully. Sep 4 17:28:25.099278 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:28:25.125631 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 17:28:25.140919 systemd[1]: Started sshd@11-172.31.30.103:22-139.178.68.195:43216.service - OpenSSH per-connection server daemon (139.178.68.195:43216). Sep 4 17:28:25.150627 systemd-logind[1947]: Removed session 11. Sep 4 17:28:25.340628 sshd[5408]: Accepted publickey for core from 139.178.68.195 port 43216 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:25.343579 sshd[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:25.353713 systemd-logind[1947]: New session 12 of user core. Sep 4 17:28:25.364023 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:28:25.656123 sshd[5408]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:25.662263 systemd[1]: sshd@11-172.31.30.103:22-139.178.68.195:43216.service: Deactivated successfully. Sep 4 17:28:25.664938 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:28:25.666208 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:28:25.667552 systemd-logind[1947]: Removed session 12. Sep 4 17:28:26.224073 containerd[1981]: time="2024-09-04T17:28:26.224033436Z" level=info msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\"" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.415 [WARNING][5434] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"906685ae-b7d7-4862-82f6-b94651385380", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310", Pod:"csi-node-driver-rnqrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliece3c4198aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.416 [INFO][5434] k8s.go 608: Cleaning up netns ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.416 [INFO][5434] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" iface="eth0" netns="" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.416 [INFO][5434] k8s.go 615: Releasing IP address(es) ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.416 [INFO][5434] utils.go 188: Calico CNI releasing IP address ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.463 [INFO][5441] ipam_plugin.go 417: Releasing address using handleID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.463 [INFO][5441] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.464 [INFO][5441] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.469 [WARNING][5441] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.469 [INFO][5441] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.471 [INFO][5441] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:26.476291 containerd[1981]: 2024-09-04 17:28:26.474 [INFO][5434] k8s.go 621: Teardown processing complete. ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.476291 containerd[1981]: time="2024-09-04T17:28:26.476066864Z" level=info msg="TearDown network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" successfully" Sep 4 17:28:26.476291 containerd[1981]: time="2024-09-04T17:28:26.476098115Z" level=info msg="StopPodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" returns successfully" Sep 4 17:28:26.477336 containerd[1981]: time="2024-09-04T17:28:26.477299155Z" level=info msg="RemovePodSandbox for \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\"" Sep 4 17:28:26.479839 containerd[1981]: time="2024-09-04T17:28:26.479799603Z" level=info msg="Forcibly stopping sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\"" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.536 [WARNING][5459] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"906685ae-b7d7-4862-82f6-b94651385380", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"edf9dcedf64cd58b1a3773e5c3676cd2d5715031e731b490ea4c532ec172b310", Pod:"csi-node-driver-rnqrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliece3c4198aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.537 [INFO][5459] k8s.go 608: Cleaning up netns ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.537 [INFO][5459] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" iface="eth0" netns="" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.537 [INFO][5459] k8s.go 615: Releasing IP address(es) ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.537 [INFO][5459] utils.go 188: Calico CNI releasing IP address ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.568 [INFO][5465] ipam_plugin.go 417: Releasing address using handleID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.568 [INFO][5465] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.568 [INFO][5465] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.574 [WARNING][5465] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.575 [INFO][5465] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" HandleID="k8s-pod-network.5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Workload="ip--172--31--30--103-k8s-csi--node--driver--rnqrz-eth0" Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.577 [INFO][5465] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:26.581526 containerd[1981]: 2024-09-04 17:28:26.579 [INFO][5459] k8s.go 621: Teardown processing complete. ContainerID="5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc" Sep 4 17:28:26.583853 containerd[1981]: time="2024-09-04T17:28:26.581599606Z" level=info msg="TearDown network for sandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" successfully" Sep 4 17:28:26.630179 containerd[1981]: time="2024-09-04T17:28:26.630116871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:28:26.630326 containerd[1981]: time="2024-09-04T17:28:26.630237033Z" level=info msg="RemovePodSandbox \"5203c9460a703b6e418b5fe03e1638e0f68b1f3f5ad6cd39d3463c46382ffabc\" returns successfully" Sep 4 17:28:26.631085 containerd[1981]: time="2024-09-04T17:28:26.631053489Z" level=info msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\"" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.703 [WARNING][5483] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0", GenerateName:"calico-kube-controllers-58c545f596-", Namespace:"calico-system", SelfLink:"", UID:"3e32abad-5cb7-4593-ad6e-3b408e428271", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c545f596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626", Pod:"calico-kube-controllers-58c545f596-pnc4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc37347d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.703 [INFO][5483] k8s.go 608: Cleaning up netns ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.703 [INFO][5483] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" iface="eth0" netns="" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.703 [INFO][5483] k8s.go 615: Releasing IP address(es) ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.703 [INFO][5483] utils.go 188: Calico CNI releasing IP address ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.732 [INFO][5489] ipam_plugin.go 417: Releasing address using handleID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.733 [INFO][5489] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.733 [INFO][5489] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.740 [WARNING][5489] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.740 [INFO][5489] ipam_plugin.go 445: Releasing address using workloadID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.742 [INFO][5489] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:26.748981 containerd[1981]: 2024-09-04 17:28:26.745 [INFO][5483] k8s.go 621: Teardown processing complete. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.748981 containerd[1981]: time="2024-09-04T17:28:26.748964713Z" level=info msg="TearDown network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" successfully" Sep 4 17:28:26.750077 containerd[1981]: time="2024-09-04T17:28:26.749000006Z" level=info msg="StopPodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" returns successfully" Sep 4 17:28:26.750422 containerd[1981]: time="2024-09-04T17:28:26.750388850Z" level=info msg="RemovePodSandbox for \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\"" Sep 4 17:28:26.750528 containerd[1981]: time="2024-09-04T17:28:26.750431790Z" level=info msg="Forcibly stopping sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\"" Sep 4 17:28:26.940518 systemd[1]: run-containerd-runc-k8s.io-e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290-runc.cpzZDw.mount: Deactivated successfully. 
Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.822 [WARNING][5507] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0", GenerateName:"calico-kube-controllers-58c545f596-", Namespace:"calico-system", SelfLink:"", UID:"3e32abad-5cb7-4593-ad6e-3b408e428271", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c545f596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"2fef45a941af4be5a651996aa8d3cd11f29d48bd436faec8e9c3cdfb925ef626", Pod:"calico-kube-controllers-58c545f596-pnc4h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fc37347d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.822 [INFO][5507] k8s.go 608: Cleaning up netns 
ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.822 [INFO][5507] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" iface="eth0" netns="" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.822 [INFO][5507] k8s.go 615: Releasing IP address(es) ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.822 [INFO][5507] utils.go 188: Calico CNI releasing IP address ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.925 [INFO][5513] ipam_plugin.go 417: Releasing address using handleID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.938 [INFO][5513] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.939 [INFO][5513] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.954 [WARNING][5513] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.954 [INFO][5513] ipam_plugin.go 445: Releasing address using workloadID ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" HandleID="k8s-pod-network.24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Workload="ip--172--31--30--103-k8s-calico--kube--controllers--58c545f596--pnc4h-eth0" Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.956 [INFO][5513] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:26.960848 containerd[1981]: 2024-09-04 17:28:26.958 [INFO][5507] k8s.go 621: Teardown processing complete. ContainerID="24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7" Sep 4 17:28:26.961764 containerd[1981]: time="2024-09-04T17:28:26.961294064Z" level=info msg="TearDown network for sandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" successfully" Sep 4 17:28:26.969658 containerd[1981]: time="2024-09-04T17:28:26.969447710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:28:26.969658 containerd[1981]: time="2024-09-04T17:28:26.969531693Z" level=info msg="RemovePodSandbox \"24cea8cb97f40f2c6769a319aa7b219f6e0c50aae4fe346948dc6a4dd9d1a3b7\" returns successfully" Sep 4 17:28:26.970667 containerd[1981]: time="2024-09-04T17:28:26.970035822Z" level=info msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\"" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.046 [WARNING][5553] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"374ecc78-947f-4762-95e1-b7832d67c6f4", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37", Pod:"coredns-76f75df574-drqh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali270d2f73b80", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.047 [INFO][5553] k8s.go 608: Cleaning up netns ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.047 [INFO][5553] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" iface="eth0" netns="" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.047 [INFO][5553] k8s.go 615: Releasing IP address(es) ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.047 [INFO][5553] utils.go 188: Calico CNI releasing IP address ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.076 [INFO][5559] ipam_plugin.go 417: Releasing address using handleID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.076 [INFO][5559] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.076 [INFO][5559] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.083 [WARNING][5559] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.083 [INFO][5559] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.085 [INFO][5559] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:27.089991 containerd[1981]: 2024-09-04 17:28:27.087 [INFO][5553] k8s.go 621: Teardown processing complete. 
ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.091907 containerd[1981]: time="2024-09-04T17:28:27.090039446Z" level=info msg="TearDown network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" successfully" Sep 4 17:28:27.091907 containerd[1981]: time="2024-09-04T17:28:27.090069860Z" level=info msg="StopPodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" returns successfully" Sep 4 17:28:27.091907 containerd[1981]: time="2024-09-04T17:28:27.090641221Z" level=info msg="RemovePodSandbox for \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\"" Sep 4 17:28:27.091907 containerd[1981]: time="2024-09-04T17:28:27.090666807Z" level=info msg="Forcibly stopping sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\"" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.141 [WARNING][5577] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"374ecc78-947f-4762-95e1-b7832d67c6f4", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"f1cd11aaa069a7d31809c7d39154d61b6ceacc275317d507fd68c9b0701fde37", Pod:"coredns-76f75df574-drqh6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali270d2f73b80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.141 [INFO][5577] k8s.go 608: Cleaning up 
netns ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.141 [INFO][5577] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" iface="eth0" netns="" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.141 [INFO][5577] k8s.go 615: Releasing IP address(es) ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.141 [INFO][5577] utils.go 188: Calico CNI releasing IP address ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.170 [INFO][5583] ipam_plugin.go 417: Releasing address using handleID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.170 [INFO][5583] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.170 [INFO][5583] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.176 [WARNING][5583] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.176 [INFO][5583] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" HandleID="k8s-pod-network.8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--drqh6-eth0" Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.178 [INFO][5583] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:27.182062 containerd[1981]: 2024-09-04 17:28:27.180 [INFO][5577] k8s.go 621: Teardown processing complete. ContainerID="8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7" Sep 4 17:28:27.183572 containerd[1981]: time="2024-09-04T17:28:27.182110731Z" level=info msg="TearDown network for sandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" successfully" Sep 4 17:28:27.187595 containerd[1981]: time="2024-09-04T17:28:27.187542658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:28:27.187595 containerd[1981]: time="2024-09-04T17:28:27.187659757Z" level=info msg="RemovePodSandbox \"8bbeef170971c6a45fa0be4285542e081ae730d19d7c6dc4e82e3d9a8cb95fd7\" returns successfully" Sep 4 17:28:27.188290 containerd[1981]: time="2024-09-04T17:28:27.188263338Z" level=info msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\"" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.248 [WARNING][5601] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5361f75e-f056-4911-be4c-82491629749e", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea", Pod:"coredns-76f75df574-56xn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66735e6ebf0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.249 [INFO][5601] k8s.go 608: Cleaning up netns ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.249 [INFO][5601] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" iface="eth0" netns="" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.249 [INFO][5601] k8s.go 615: Releasing IP address(es) ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.249 [INFO][5601] utils.go 188: Calico CNI releasing IP address ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.276 [INFO][5607] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.277 [INFO][5607] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.277 [INFO][5607] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.283 [WARNING][5607] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.283 [INFO][5607] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.285 [INFO][5607] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:27.290444 containerd[1981]: 2024-09-04 17:28:27.288 [INFO][5601] k8s.go 621: Teardown processing complete. 
ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.292499 containerd[1981]: time="2024-09-04T17:28:27.290548299Z" level=info msg="TearDown network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" successfully" Sep 4 17:28:27.292499 containerd[1981]: time="2024-09-04T17:28:27.290627839Z" level=info msg="StopPodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" returns successfully" Sep 4 17:28:27.292690 containerd[1981]: time="2024-09-04T17:28:27.292582289Z" level=info msg="RemovePodSandbox for \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\"" Sep 4 17:28:27.292690 containerd[1981]: time="2024-09-04T17:28:27.292644750Z" level=info msg="Forcibly stopping sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\"" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.342 [WARNING][5625] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5361f75e-f056-4911-be4c-82491629749e", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 27, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"705ec2ae833068b421d429a2b215ee7e97de77f21335a8ae2cbe9dee33f3d3ea", Pod:"coredns-76f75df574-56xn4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66735e6ebf0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.342 [INFO][5625] k8s.go 608: Cleaning up 
netns ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.342 [INFO][5625] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" iface="eth0" netns="" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.342 [INFO][5625] k8s.go 615: Releasing IP address(es) ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.342 [INFO][5625] utils.go 188: Calico CNI releasing IP address ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.373 [INFO][5631] ipam_plugin.go 417: Releasing address using handleID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.373 [INFO][5631] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.373 [INFO][5631] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.380 [WARNING][5631] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.380 [INFO][5631] ipam_plugin.go 445: Releasing address using workloadID ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" HandleID="k8s-pod-network.7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Workload="ip--172--31--30--103-k8s-coredns--76f75df574--56xn4-eth0" Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.383 [INFO][5631] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:28:27.389155 containerd[1981]: 2024-09-04 17:28:27.385 [INFO][5625] k8s.go 621: Teardown processing complete. ContainerID="7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75" Sep 4 17:28:27.389934 containerd[1981]: time="2024-09-04T17:28:27.389159522Z" level=info msg="TearDown network for sandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" successfully" Sep 4 17:28:27.396418 containerd[1981]: time="2024-09-04T17:28:27.396279649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:28:27.396578 containerd[1981]: time="2024-09-04T17:28:27.396452732Z" level=info msg="RemovePodSandbox \"7a2a9bdb47db04fe8631cc4d371f1b447fdd623e7562000870a0ca39bde78d75\" returns successfully" Sep 4 17:28:30.693077 systemd[1]: Started sshd@12-172.31.30.103:22-139.178.68.195:53380.service - OpenSSH per-connection server daemon (139.178.68.195:53380). 
Sep 4 17:28:30.957316 sshd[5651]: Accepted publickey for core from 139.178.68.195 port 53380 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:30.960892 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:30.968267 systemd-logind[1947]: New session 13 of user core. Sep 4 17:28:30.978497 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:28:31.250131 sshd[5651]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:31.255529 systemd[1]: sshd@12-172.31.30.103:22-139.178.68.195:53380.service: Deactivated successfully. Sep 4 17:28:31.258095 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:28:31.258896 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:28:31.260515 systemd-logind[1947]: Removed session 13. Sep 4 17:28:36.293636 systemd[1]: Started sshd@13-172.31.30.103:22-139.178.68.195:55740.service - OpenSSH per-connection server daemon (139.178.68.195:55740). Sep 4 17:28:36.487812 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 55740 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:36.490405 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:36.495187 systemd-logind[1947]: New session 14 of user core. Sep 4 17:28:36.500451 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:28:36.731557 sshd[5668]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:36.752975 systemd[1]: sshd@13-172.31.30.103:22-139.178.68.195:55740.service: Deactivated successfully. Sep 4 17:28:36.758636 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:28:36.760144 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:28:36.761595 systemd-logind[1947]: Removed session 14. 
Sep 4 17:28:41.775968 systemd[1]: Started sshd@14-172.31.30.103:22-139.178.68.195:55748.service - OpenSSH per-connection server daemon (139.178.68.195:55748). Sep 4 17:28:41.975610 sshd[5688]: Accepted publickey for core from 139.178.68.195 port 55748 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:41.977819 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:41.985420 systemd-logind[1947]: New session 15 of user core. Sep 4 17:28:41.988436 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:28:42.210673 sshd[5688]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:42.216769 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:28:42.218043 systemd[1]: sshd@14-172.31.30.103:22-139.178.68.195:55748.service: Deactivated successfully. Sep 4 17:28:42.222479 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:28:42.224682 systemd-logind[1947]: Removed session 15. Sep 4 17:28:47.254691 systemd[1]: Started sshd@15-172.31.30.103:22-139.178.68.195:33158.service - OpenSSH per-connection server daemon (139.178.68.195:33158). Sep 4 17:28:47.442565 sshd[5702]: Accepted publickey for core from 139.178.68.195 port 33158 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:47.444687 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:47.452760 systemd-logind[1947]: New session 16 of user core. Sep 4 17:28:47.458469 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:28:47.785801 sshd[5702]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:47.795485 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:28:47.796770 systemd[1]: sshd@15-172.31.30.103:22-139.178.68.195:33158.service: Deactivated successfully. Sep 4 17:28:47.802395 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 4 17:28:47.822899 systemd[1]: Started sshd@16-172.31.30.103:22-139.178.68.195:33164.service - OpenSSH per-connection server daemon (139.178.68.195:33164). Sep 4 17:28:47.825979 systemd-logind[1947]: Removed session 16. Sep 4 17:28:48.039575 sshd[5715]: Accepted publickey for core from 139.178.68.195 port 33164 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:48.039372 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:48.048693 systemd-logind[1947]: New session 17 of user core. Sep 4 17:28:48.056736 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:28:48.829596 sshd[5715]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:48.840625 systemd[1]: sshd@16-172.31.30.103:22-139.178.68.195:33164.service: Deactivated successfully. Sep 4 17:28:48.845735 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:28:48.847860 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:28:48.884278 systemd[1]: Started sshd@17-172.31.30.103:22-139.178.68.195:33170.service - OpenSSH per-connection server daemon (139.178.68.195:33170). Sep 4 17:28:48.885918 systemd-logind[1947]: Removed session 17. Sep 4 17:28:49.071706 sshd[5749]: Accepted publickey for core from 139.178.68.195 port 33170 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:49.075199 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:49.081374 systemd-logind[1947]: New session 18 of user core. Sep 4 17:28:49.086450 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:28:51.947905 sshd[5749]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:51.961286 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:28:51.962563 systemd[1]: sshd@17-172.31.30.103:22-139.178.68.195:33170.service: Deactivated successfully. 
Sep 4 17:28:51.969419 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:28:51.997090 systemd[1]: Started sshd@18-172.31.30.103:22-139.178.68.195:33176.service - OpenSSH per-connection server daemon (139.178.68.195:33176). Sep 4 17:28:52.001730 systemd-logind[1947]: Removed session 18. Sep 4 17:28:52.214769 sshd[5779]: Accepted publickey for core from 139.178.68.195 port 33176 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:52.216812 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:52.238465 systemd-logind[1947]: New session 19 of user core. Sep 4 17:28:52.245493 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:28:53.007308 sshd[5779]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:53.014079 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:28:53.015083 systemd[1]: sshd@18-172.31.30.103:22-139.178.68.195:33176.service: Deactivated successfully. Sep 4 17:28:53.020017 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:28:53.022899 systemd-logind[1947]: Removed session 19. Sep 4 17:28:53.043613 systemd[1]: Started sshd@19-172.31.30.103:22-139.178.68.195:33192.service - OpenSSH per-connection server daemon (139.178.68.195:33192). Sep 4 17:28:53.284098 sshd[5796]: Accepted publickey for core from 139.178.68.195 port 33192 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:53.286774 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:53.293829 systemd-logind[1947]: New session 20 of user core. Sep 4 17:28:53.298419 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:28:53.577487 sshd[5796]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:53.585781 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit. 
Sep 4 17:28:53.588760 systemd[1]: sshd@19-172.31.30.103:22-139.178.68.195:33192.service: Deactivated successfully. Sep 4 17:28:53.593774 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:28:53.595215 systemd-logind[1947]: Removed session 20. Sep 4 17:28:58.617405 systemd[1]: Started sshd@20-172.31.30.103:22-139.178.68.195:33708.service - OpenSSH per-connection server daemon (139.178.68.195:33708). Sep 4 17:28:58.827435 sshd[5830]: Accepted publickey for core from 139.178.68.195 port 33708 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:28:58.831450 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:28:58.858171 systemd-logind[1947]: New session 21 of user core. Sep 4 17:28:58.864495 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:28:59.179485 sshd[5830]: pam_unix(sshd:session): session closed for user core Sep 4 17:28:59.188384 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:28:59.189542 systemd[1]: sshd@20-172.31.30.103:22-139.178.68.195:33708.service: Deactivated successfully. Sep 4 17:28:59.197817 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:28:59.200384 systemd-logind[1947]: Removed session 21. Sep 4 17:29:00.100708 kubelet[3195]: I0904 17:29:00.099808 3195 topology_manager.go:215] "Topology Admit Handler" podUID="c8ee9856-fca3-490f-ad69-82b626f20d2a" podNamespace="calico-apiserver" podName="calico-apiserver-7f7cc95944-6bp59" Sep 4 17:29:00.155594 systemd[1]: Created slice kubepods-besteffort-podc8ee9856_fca3_490f_ad69_82b626f20d2a.slice - libcontainer container kubepods-besteffort-podc8ee9856_fca3_490f_ad69_82b626f20d2a.slice. 
Sep 4 17:29:00.223577 kubelet[3195]: I0904 17:29:00.223530 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8ee9856-fca3-490f-ad69-82b626f20d2a-calico-apiserver-certs\") pod \"calico-apiserver-7f7cc95944-6bp59\" (UID: \"c8ee9856-fca3-490f-ad69-82b626f20d2a\") " pod="calico-apiserver/calico-apiserver-7f7cc95944-6bp59" Sep 4 17:29:00.223778 kubelet[3195]: I0904 17:29:00.223655 3195 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpwz\" (UniqueName: \"kubernetes.io/projected/c8ee9856-fca3-490f-ad69-82b626f20d2a-kube-api-access-wvpwz\") pod \"calico-apiserver-7f7cc95944-6bp59\" (UID: \"c8ee9856-fca3-490f-ad69-82b626f20d2a\") " pod="calico-apiserver/calico-apiserver-7f7cc95944-6bp59" Sep 4 17:29:00.333257 kubelet[3195]: E0904 17:29:00.333182 3195 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:29:00.392336 kubelet[3195]: E0904 17:29:00.390966 3195 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8ee9856-fca3-490f-ad69-82b626f20d2a-calico-apiserver-certs podName:c8ee9856-fca3-490f-ad69-82b626f20d2a nodeName:}" failed. No retries permitted until 2024-09-04 17:29:00.847433831 +0000 UTC m=+94.861846775 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/c8ee9856-fca3-490f-ad69-82b626f20d2a-calico-apiserver-certs") pod "calico-apiserver-7f7cc95944-6bp59" (UID: "c8ee9856-fca3-490f-ad69-82b626f20d2a") : secret "calico-apiserver-certs" not found Sep 4 17:29:01.078985 containerd[1981]: time="2024-09-04T17:29:01.078909715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7cc95944-6bp59,Uid:c8ee9856-fca3-490f-ad69-82b626f20d2a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:29:01.543856 systemd-networkd[1810]: calic6d124439ec: Link UP Sep 4 17:29:01.544261 systemd-networkd[1810]: calic6d124439ec: Gained carrier Sep 4 17:29:01.550653 (udev-worker)[5869]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.344 [INFO][5849] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0 calico-apiserver-7f7cc95944- calico-apiserver c8ee9856-fca3-490f-ad69-82b626f20d2a 1070 0 2024-09-04 17:29:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7cc95944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-103 calico-apiserver-7f7cc95944-6bp59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic6d124439ec [] []}} ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.344 [INFO][5849] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.471 [INFO][5862] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" HandleID="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Workload="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.488 [INFO][5862] ipam_plugin.go 270: Auto assigning IP ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" HandleID="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Workload="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-103", "pod":"calico-apiserver-7f7cc95944-6bp59", "timestamp":"2024-09-04 17:29:01.471712583 +0000 UTC"}, Hostname:"ip-172-31-30-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.489 [INFO][5862] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.489 [INFO][5862] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.489 [INFO][5862] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-103' Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.492 [INFO][5862] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.497 [INFO][5862] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.504 [INFO][5862] ipam.go 489: Trying affinity for 192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.507 [INFO][5862] ipam.go 155: Attempting to load block cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.512 [INFO][5862] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.512 [INFO][5862] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.515 [INFO][5862] ipam.go 1685: Creating new handle: k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.520 [INFO][5862] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.532 [INFO][5862] ipam.go 1216: Successfully claimed IPs: [192.168.10.197/26] block=192.168.10.192/26 
handle="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.532 [INFO][5862] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.197/26] handle="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" host="ip-172-31-30-103" Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.534 [INFO][5862] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:29:01.577978 containerd[1981]: 2024-09-04 17:29:01.534 [INFO][5862] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.197/26] IPv6=[] ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" HandleID="k8s-pod-network.3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Workload="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.539 [INFO][5849] k8s.go 386: Populated endpoint ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0", GenerateName:"calico-apiserver-7f7cc95944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ee9856-fca3-490f-ad69-82b626f20d2a", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7cc95944", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"", Pod:"calico-apiserver-7f7cc95944-6bp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6d124439ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.539 [INFO][5849] k8s.go 387: Calico CNI using IPs: [192.168.10.197/32] ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.540 [INFO][5849] dataplane_linux.go 68: Setting the host side veth name to calic6d124439ec ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.543 [INFO][5849] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.545 [INFO][5849] k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0", GenerateName:"calico-apiserver-7f7cc95944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8ee9856-fca3-490f-ad69-82b626f20d2a", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 29, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7cc95944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-103", ContainerID:"3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e", Pod:"calico-apiserver-7f7cc95944-6bp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6d124439ec", MAC:"0a:38:86:5d:1b:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:29:01.580824 containerd[1981]: 2024-09-04 17:29:01.571 [INFO][5849] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e" Namespace="calico-apiserver" Pod="calico-apiserver-7f7cc95944-6bp59" WorkloadEndpoint="ip--172--31--30--103-k8s-calico--apiserver--7f7cc95944--6bp59-eth0" Sep 4 17:29:01.692388 containerd[1981]: time="2024-09-04T17:29:01.692169988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:29:01.692388 containerd[1981]: time="2024-09-04T17:29:01.692273825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:01.693078 containerd[1981]: time="2024-09-04T17:29:01.692811855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:29:01.693078 containerd[1981]: time="2024-09-04T17:29:01.692905932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:29:01.749022 systemd[1]: Started cri-containerd-3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e.scope - libcontainer container 3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e. 
Sep 4 17:29:01.952599 containerd[1981]: time="2024-09-04T17:29:01.952544066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7cc95944-6bp59,Uid:c8ee9856-fca3-490f-ad69-82b626f20d2a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e\"" Sep 4 17:29:01.973204 containerd[1981]: time="2024-09-04T17:29:01.972660966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:29:03.529063 systemd-networkd[1810]: calic6d124439ec: Gained IPv6LL Sep 4 17:29:04.232995 systemd[1]: Started sshd@21-172.31.30.103:22-139.178.68.195:33716.service - OpenSSH per-connection server daemon (139.178.68.195:33716). Sep 4 17:29:04.499423 sshd[5932]: Accepted publickey for core from 139.178.68.195 port 33716 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:29:04.503949 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:04.519698 systemd-logind[1947]: New session 22 of user core. Sep 4 17:29:04.527266 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:29:05.461851 sshd[5932]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:05.467836 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:29:05.468712 systemd[1]: sshd@21-172.31.30.103:22-139.178.68.195:33716.service: Deactivated successfully. Sep 4 17:29:05.474755 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:29:05.477250 systemd-logind[1947]: Removed session 22. 
Sep 4 17:29:05.673016 containerd[1981]: time="2024-09-04T17:29:05.657414238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:29:05.699270 containerd[1981]: time="2024-09-04T17:29:05.698421477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.703626452s" Sep 4 17:29:05.699270 containerd[1981]: time="2024-09-04T17:29:05.698489797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:29:05.699968 containerd[1981]: time="2024-09-04T17:29:05.699925126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.713532 containerd[1981]: time="2024-09-04T17:29:05.712424928Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.713532 containerd[1981]: time="2024-09-04T17:29:05.713419492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:29:05.743496 containerd[1981]: time="2024-09-04T17:29:05.743438310Z" level=info msg="CreateContainer within sandbox \"3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:29:05.777286 containerd[1981]: 
time="2024-09-04T17:29:05.776736214Z" level=info msg="CreateContainer within sandbox \"3f1dbd8e4b843eef6c62d00ee34e9f1eae5c235ce5dafd2ac3b3c8774bd4b04e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6457922a02d2f72d1a806a831708eda7e7cb511e22bc65741e988bafb01bad3\"" Sep 4 17:29:05.779974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281162975.mount: Deactivated successfully. Sep 4 17:29:05.780492 containerd[1981]: time="2024-09-04T17:29:05.780451705Z" level=info msg="StartContainer for \"e6457922a02d2f72d1a806a831708eda7e7cb511e22bc65741e988bafb01bad3\"" Sep 4 17:29:05.869594 systemd[1]: Started cri-containerd-e6457922a02d2f72d1a806a831708eda7e7cb511e22bc65741e988bafb01bad3.scope - libcontainer container e6457922a02d2f72d1a806a831708eda7e7cb511e22bc65741e988bafb01bad3. Sep 4 17:29:05.982805 containerd[1981]: time="2024-09-04T17:29:05.981278813Z" level=info msg="StartContainer for \"e6457922a02d2f72d1a806a831708eda7e7cb511e22bc65741e988bafb01bad3\" returns successfully" Sep 4 17:29:06.019541 ntpd[1942]: Listen normally on 13 calic6d124439ec [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:29:06.019993 ntpd[1942]: 4 Sep 17:29:06 ntpd[1942]: Listen normally on 13 calic6d124439ec [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:29:07.147170 kubelet[3195]: I0904 17:29:07.146374 3195 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7cc95944-6bp59" podStartSLOduration=3.406999852 podStartE2EDuration="7.135336051s" podCreationTimestamp="2024-09-04 17:29:00 +0000 UTC" firstStartedPulling="2024-09-04 17:29:01.971460067 +0000 UTC m=+95.985873019" lastFinishedPulling="2024-09-04 17:29:05.69979627 +0000 UTC m=+99.714209218" observedRunningTime="2024-09-04 17:29:06.163196099 +0000 UTC m=+100.177609064" watchObservedRunningTime="2024-09-04 17:29:07.135336051 +0000 UTC m=+101.149749015" Sep 4 17:29:10.505906 systemd[1]: Started sshd@22-172.31.30.103:22-139.178.68.195:48702.service - 
OpenSSH per-connection server daemon (139.178.68.195:48702). Sep 4 17:29:10.735091 sshd[5999]: Accepted publickey for core from 139.178.68.195 port 48702 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:29:10.737370 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:10.744726 systemd-logind[1947]: New session 23 of user core. Sep 4 17:29:10.748542 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:29:11.169477 sshd[5999]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:11.175946 systemd[1]: sshd@22-172.31.30.103:22-139.178.68.195:48702.service: Deactivated successfully. Sep 4 17:29:11.178619 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:29:11.181308 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:29:11.183494 systemd-logind[1947]: Removed session 23. Sep 4 17:29:16.209944 systemd[1]: Started sshd@23-172.31.30.103:22-139.178.68.195:48708.service - OpenSSH per-connection server daemon (139.178.68.195:48708). Sep 4 17:29:16.416456 sshd[6019]: Accepted publickey for core from 139.178.68.195 port 48708 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g Sep 4 17:29:16.419566 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:29:16.429136 systemd-logind[1947]: New session 24 of user core. Sep 4 17:29:16.431542 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:29:16.826445 sshd[6019]: pam_unix(sshd:session): session closed for user core Sep 4 17:29:16.835090 systemd[1]: sshd@23-172.31.30.103:22-139.178.68.195:48708.service: Deactivated successfully. Sep 4 17:29:16.840809 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:29:16.842269 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:29:16.845063 systemd-logind[1947]: Removed session 24. 
Sep 4 17:29:18.923708 systemd[1]: run-containerd-runc-k8s.io-e6a1469bcf28683fd5182ef2b718cf9f0883e6d6c6318bf53efe70a757937290-runc.6kY21Q.mount: Deactivated successfully.
Sep 4 17:29:21.869830 systemd[1]: Started sshd@24-172.31.30.103:22-139.178.68.195:35154.service - OpenSSH per-connection server daemon (139.178.68.195:35154).
Sep 4 17:29:22.074388 sshd[6078]: Accepted publickey for core from 139.178.68.195 port 35154 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:29:22.075722 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:29:22.081724 systemd-logind[1947]: New session 25 of user core.
Sep 4 17:29:22.088474 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:29:22.478913 sshd[6078]: pam_unix(sshd:session): session closed for user core
Sep 4 17:29:22.486125 systemd[1]: sshd@24-172.31.30.103:22-139.178.68.195:35154.service: Deactivated successfully.
Sep 4 17:29:22.492354 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:29:22.493591 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:29:22.498595 systemd-logind[1947]: Removed session 25.
Sep 4 17:29:27.516724 systemd[1]: Started sshd@25-172.31.30.103:22-139.178.68.195:49066.service - OpenSSH per-connection server daemon (139.178.68.195:49066).
Sep 4 17:29:27.714325 sshd[6117]: Accepted publickey for core from 139.178.68.195 port 49066 ssh2: RSA SHA256:RWdrpmeD0uODGtoWpmuFEK/G9FvWcalt1Ic3MhVma0g
Sep 4 17:29:27.719954 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:29:27.734089 systemd-logind[1947]: New session 26 of user core.
Sep 4 17:29:27.746130 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:29:28.385139 sshd[6117]: pam_unix(sshd:session): session closed for user core
Sep 4 17:29:28.388785 systemd[1]: sshd@25-172.31.30.103:22-139.178.68.195:49066.service: Deactivated successfully.
Sep 4 17:29:28.391555 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:29:28.394405 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:29:28.397039 systemd-logind[1947]: Removed session 26.
Sep 4 17:29:43.187425 systemd[1]: cri-containerd-319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1.scope: Deactivated successfully.
Sep 4 17:29:43.189621 systemd[1]: cri-containerd-319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1.scope: Consumed 3.145s CPU time, 28.3M memory peak, 0B memory swap peak.
Sep 4 17:29:43.242684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1-rootfs.mount: Deactivated successfully.
Sep 4 17:29:43.249199 containerd[1981]: time="2024-09-04T17:29:43.240707197Z" level=info msg="shim disconnected" id=319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1 namespace=k8s.io
Sep 4 17:29:43.249967 containerd[1981]: time="2024-09-04T17:29:43.249199602Z" level=warning msg="cleaning up after shim disconnected" id=319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1 namespace=k8s.io
Sep 4 17:29:43.249967 containerd[1981]: time="2024-09-04T17:29:43.249222356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:29:43.355077 systemd[1]: cri-containerd-b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33.scope: Deactivated successfully.
Sep 4 17:29:43.355380 systemd[1]: cri-containerd-b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33.scope: Consumed 6.248s CPU time.
Sep 4 17:29:43.375042 kubelet[3195]: I0904 17:29:43.374992 3195 scope.go:117] "RemoveContainer" containerID="319d379c9f852d89f87d8331840c6c83c26cc13953ded5ae96f834485412a5d1"
Sep 4 17:29:43.385187 containerd[1981]: time="2024-09-04T17:29:43.385046124Z" level=info msg="CreateContainer within sandbox \"482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:29:43.395957 containerd[1981]: time="2024-09-04T17:29:43.395672858Z" level=info msg="shim disconnected" id=b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33 namespace=k8s.io
Sep 4 17:29:43.395957 containerd[1981]: time="2024-09-04T17:29:43.395736353Z" level=warning msg="cleaning up after shim disconnected" id=b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33 namespace=k8s.io
Sep 4 17:29:43.395957 containerd[1981]: time="2024-09-04T17:29:43.395752965Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:29:43.397525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33-rootfs.mount: Deactivated successfully.
Sep 4 17:29:43.425882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442010062.mount: Deactivated successfully.
Sep 4 17:29:43.427941 containerd[1981]: time="2024-09-04T17:29:43.427901883Z" level=info msg="CreateContainer within sandbox \"482d9fc0928288b81a8301dbe8164b2810253355b638d4ce2b735fa04d5c192c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"09197264902d9fa1fda6622eae00ce39ac0c9967afc625d358df6db8fd92d65f\""
Sep 4 17:29:43.428732 containerd[1981]: time="2024-09-04T17:29:43.428700067Z" level=info msg="StartContainer for \"09197264902d9fa1fda6622eae00ce39ac0c9967afc625d358df6db8fd92d65f\""
Sep 4 17:29:43.475464 systemd[1]: Started cri-containerd-09197264902d9fa1fda6622eae00ce39ac0c9967afc625d358df6db8fd92d65f.scope - libcontainer container 09197264902d9fa1fda6622eae00ce39ac0c9967afc625d358df6db8fd92d65f.
Sep 4 17:29:43.537524 containerd[1981]: time="2024-09-04T17:29:43.537467578Z" level=info msg="StartContainer for \"09197264902d9fa1fda6622eae00ce39ac0c9967afc625d358df6db8fd92d65f\" returns successfully"
Sep 4 17:29:44.361934 kubelet[3195]: I0904 17:29:44.361901 3195 scope.go:117] "RemoveContainer" containerID="b4724da9109fda2181bc97cc8f308e9b8d5dff477eadcc8b72aafa03aefcba33"
Sep 4 17:29:44.371497 containerd[1981]: time="2024-09-04T17:29:44.371453522Z" level=info msg="CreateContainer within sandbox \"3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:29:44.450430 containerd[1981]: time="2024-09-04T17:29:44.449413962Z" level=info msg="CreateContainer within sandbox \"3a8dbf8b6333b8f7e309ac3b0ff069ca37641a68116abc5b4d20f107a9e7d6f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7\""
Sep 4 17:29:44.450430 containerd[1981]: time="2024-09-04T17:29:44.449916004Z" level=info msg="StartContainer for \"7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7\""
Sep 4 17:29:44.540523 systemd[1]: Started cri-containerd-7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7.scope - libcontainer container 7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7.
Sep 4 17:29:44.588412 containerd[1981]: time="2024-09-04T17:29:44.588365145Z" level=info msg="StartContainer for \"7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7\" returns successfully"
Sep 4 17:29:45.249172 systemd[1]: run-containerd-runc-k8s.io-7d69d967a51e5f48e17f4e77058fda9205ca22a97fbc27315572fe9c324315d7-runc.pViLdo.mount: Deactivated successfully.
Sep 4 17:29:47.622655 systemd[1]: cri-containerd-6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec.scope: Deactivated successfully.
Sep 4 17:29:47.623089 systemd[1]: cri-containerd-6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec.scope: Consumed 2.021s CPU time, 17.8M memory peak, 0B memory swap peak.
Sep 4 17:29:47.668379 containerd[1981]: time="2024-09-04T17:29:47.665719985Z" level=info msg="shim disconnected" id=6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec namespace=k8s.io
Sep 4 17:29:47.668379 containerd[1981]: time="2024-09-04T17:29:47.665810445Z" level=warning msg="cleaning up after shim disconnected" id=6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec namespace=k8s.io
Sep 4 17:29:47.668379 containerd[1981]: time="2024-09-04T17:29:47.665835697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:29:47.669986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec-rootfs.mount: Deactivated successfully.
Sep 4 17:29:48.137695 systemd[1]: run-containerd-runc-k8s.io-89f3a67f8d922e8bc44260d8c46f643b646726450c56041721cbcdf710801967-runc.T9RgPU.mount: Deactivated successfully.
Sep 4 17:29:48.411343 kubelet[3195]: I0904 17:29:48.410519 3195 scope.go:117] "RemoveContainer" containerID="6206820158e65f516bfc116ae0509d583a8d0b891f11feaabcd970bfe5b161ec"
Sep 4 17:29:48.415620 containerd[1981]: time="2024-09-04T17:29:48.415045462Z" level=info msg="CreateContainer within sandbox \"5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:29:48.446174 containerd[1981]: time="2024-09-04T17:29:48.446110851Z" level=info msg="CreateContainer within sandbox \"5616b8481dc76820f76e2bd03fc2295429a7fb4a6269a5b0d450d54055f1f650\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"866753f6914ef95e860f427207d4998680a1def5729041f157d2155f50b461ae\""
Sep 4 17:29:48.447194 containerd[1981]: time="2024-09-04T17:29:48.447020297Z" level=info msg="StartContainer for \"866753f6914ef95e860f427207d4998680a1def5729041f157d2155f50b461ae\""
Sep 4 17:29:48.490800 systemd[1]: Started cri-containerd-866753f6914ef95e860f427207d4998680a1def5729041f157d2155f50b461ae.scope - libcontainer container 866753f6914ef95e860f427207d4998680a1def5729041f157d2155f50b461ae.
Sep 4 17:29:48.550870 containerd[1981]: time="2024-09-04T17:29:48.550828683Z" level=info msg="StartContainer for \"866753f6914ef95e860f427207d4998680a1def5729041f157d2155f50b461ae\" returns successfully"
Sep 4 17:29:48.808271 kubelet[3195]: E0904 17:29:48.808203 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-103?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"