Jun 25 18:36:17.140840 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:36:17.140884 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:36:17.140898 kernel: BIOS-provided physical RAM map:
Jun 25 18:36:17.140909 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 25 18:36:17.140918 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 25 18:36:17.140929 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 25 18:36:17.140945 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jun 25 18:36:17.140956 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jun 25 18:36:17.140967 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jun 25 18:36:17.140978 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 25 18:36:17.140989 kernel: NX (Execute Disable) protection: active
Jun 25 18:36:17.141000 kernel: APIC: Static calls initialized
Jun 25 18:36:17.141011 kernel: SMBIOS 2.7 present.
Jun 25 18:36:17.141022 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jun 25 18:36:17.141039 kernel: Hypervisor detected: KVM
Jun 25 18:36:17.141052 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 18:36:17.141064 kernel: kvm-clock: using sched offset of 6584257441 cycles
Jun 25 18:36:17.141100 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 18:36:17.141113 kernel: tsc: Detected 2500.004 MHz processor
Jun 25 18:36:17.141124 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:36:17.141135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:36:17.141150 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jun 25 18:36:17.141162 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 25 18:36:17.141173 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:36:17.141185 kernel: Using GB pages for direct mapping
Jun 25 18:36:17.141195 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:36:17.141207 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jun 25 18:36:17.141219 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jun 25 18:36:17.141231 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jun 25 18:36:17.141243 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jun 25 18:36:17.141258 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jun 25 18:36:17.141270 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jun 25 18:36:17.143820 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jun 25 18:36:17.143846 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jun 25 18:36:17.143859 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jun 25 18:36:17.143871 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jun 25 18:36:17.143883 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jun 25 18:36:17.143895 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jun 25 18:36:17.144062 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jun 25 18:36:17.144095 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jun 25 18:36:17.144115 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jun 25 18:36:17.144129 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jun 25 18:36:17.144143 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jun 25 18:36:17.144156 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jun 25 18:36:17.144176 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jun 25 18:36:17.144189 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jun 25 18:36:17.144203 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jun 25 18:36:17.144218 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jun 25 18:36:17.144232 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 25 18:36:17.144246 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 25 18:36:17.144260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jun 25 18:36:17.144274 kernel: NUMA: Initialized distance table, cnt=1
Jun 25 18:36:17.144287 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jun 25 18:36:17.144305 kernel: Zone ranges:
Jun 25 18:36:17.144319 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:36:17.144333 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jun 25 18:36:17.144346 kernel: Normal empty
Jun 25 18:36:17.144360 kernel: Movable zone start for each node
Jun 25 18:36:17.144373 kernel: Early memory node ranges
Jun 25 18:36:17.144387 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 25 18:36:17.144400 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jun 25 18:36:17.144421 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jun 25 18:36:17.144439 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:36:17.144453 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 25 18:36:17.144467 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jun 25 18:36:17.144482 kernel: ACPI: PM-Timer IO Port: 0xb008
Jun 25 18:36:17.144496 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 18:36:17.144511 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jun 25 18:36:17.144525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 18:36:17.144539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:36:17.144554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 18:36:17.144572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 18:36:17.144586 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:36:17.144600 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 25 18:36:17.144614 kernel: TSC deadline timer available
Jun 25 18:36:17.144634 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 25 18:36:17.144648 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 25 18:36:17.144664 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jun 25 18:36:17.144678 kernel: Booting paravirtualized kernel on KVM
Jun 25 18:36:17.144692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:36:17.144711 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 25 18:36:17.144729 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Jun 25 18:36:17.144742 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Jun 25 18:36:17.144755 kernel: pcpu-alloc: [0] 0 1
Jun 25 18:36:17.144769 kernel: kvm-guest: PV spinlocks enabled
Jun 25 18:36:17.144784 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:36:17.144802 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:36:17.144819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:36:17.144838 kernel: random: crng init done
Jun 25 18:36:17.144854 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:36:17.145164 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 25 18:36:17.145184 kernel: Fallback order for Node 0: 0
Jun 25 18:36:17.145199 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jun 25 18:36:17.145216 kernel: Policy zone: DMA32
Jun 25 18:36:17.145234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:36:17.145249 kernel: Memory: 1926200K/2057760K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131300K reserved, 0K cma-reserved)
Jun 25 18:36:17.145264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 25 18:36:17.145284 kernel: Kernel/User page tables isolation: enabled
Jun 25 18:36:17.145301 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:36:17.145316 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:36:17.145330 kernel: Dynamic Preempt: voluntary
Jun 25 18:36:17.145346 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:36:17.145364 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:36:17.145382 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 25 18:36:17.145400 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:36:17.145416 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:36:17.145433 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:36:17.145447 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:36:17.145462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 25 18:36:17.145476 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 25 18:36:17.145489 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:36:17.145504 kernel: Console: colour VGA+ 80x25
Jun 25 18:36:17.145519 kernel: printk: console [ttyS0] enabled
Jun 25 18:36:17.145533 kernel: ACPI: Core revision 20230628
Jun 25 18:36:17.145547 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jun 25 18:36:17.145561 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:36:17.145580 kernel: x2apic enabled
Jun 25 18:36:17.145595 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 25 18:36:17.145621 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jun 25 18:36:17.145640 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Jun 25 18:36:17.145656 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jun 25 18:36:17.145672 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jun 25 18:36:17.145689 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:36:17.145705 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:36:17.145720 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:36:17.145736 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:36:17.145753 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 25 18:36:17.145768 kernel: RETBleed: Vulnerable
Jun 25 18:36:17.145787 kernel: Speculative Store Bypass: Vulnerable
Jun 25 18:36:17.145803 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:36:17.145819 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 25 18:36:17.145835 kernel: GDS: Unknown: Dependent on hypervisor status
Jun 25 18:36:17.145851 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:36:17.145867 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:36:17.145886 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:36:17.145903 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jun 25 18:36:17.145918 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jun 25 18:36:17.145934 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 25 18:36:17.145950 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 25 18:36:17.145966 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 25 18:36:17.145982 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jun 25 18:36:17.145999 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:36:17.146013 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jun 25 18:36:17.146101 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jun 25 18:36:17.146113 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jun 25 18:36:17.146131 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jun 25 18:36:17.146144 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jun 25 18:36:17.146158 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jun 25 18:36:17.146171 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jun 25 18:36:17.146186 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:36:17.146200 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:36:17.146215 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:36:17.146231 kernel: SELinux: Initializing.
Jun 25 18:36:17.146246 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 25 18:36:17.146262 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jun 25 18:36:17.146279 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jun 25 18:36:17.146295 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:36:17.146315 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:36:17.146332 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:36:17.146348 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jun 25 18:36:17.146423 kernel: signal: max sigframe size: 3632
Jun 25 18:36:17.146443 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:36:17.146461 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:36:17.146477 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 25 18:36:17.146493 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:36:17.146613 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:36:17.146636 kernel: .... node #0, CPUs: #1
Jun 25 18:36:17.146654 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jun 25 18:36:17.146673 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 25 18:36:17.146935 kernel: smp: Brought up 1 node, 2 CPUs
Jun 25 18:36:17.146998 kernel: smpboot: Max logical packages: 1
Jun 25 18:36:17.147016 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Jun 25 18:36:17.147032 kernel: devtmpfs: initialized
Jun 25 18:36:17.147049 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:36:17.147083 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:36:17.147097 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 25 18:36:17.147110 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:36:17.147124 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:36:17.147136 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:36:17.147149 kernel: audit: type=2000 audit(1719340576.247:1): state=initialized audit_enabled=0 res=1
Jun 25 18:36:17.147162 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:36:17.147175 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:36:17.147189 kernel: cpuidle: using governor menu
Jun 25 18:36:17.147208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:36:17.147223 kernel: dca service started, version 1.12.1
Jun 25 18:36:17.147240 kernel: PCI: Using configuration type 1 for base access
Jun 25 18:36:17.147254 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:36:17.147267 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:36:17.147280 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:36:17.147295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:36:17.147308 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:36:17.147322 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:36:17.147340 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:36:17.147384 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:36:17.147398 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:36:17.147412 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jun 25 18:36:17.147426 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:36:17.147440 kernel: ACPI: Interpreter enabled
Jun 25 18:36:17.147454 kernel: ACPI: PM: (supports S0 S5)
Jun 25 18:36:17.147468 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:36:17.147484 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:36:17.147502 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 18:36:17.147516 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jun 25 18:36:17.147530 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:36:17.147760 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:36:17.148384 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 25 18:36:17.148645 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 25 18:36:17.148669 kernel: acpiphp: Slot [3] registered
Jun 25 18:36:17.148691 kernel: acpiphp: Slot [4] registered
Jun 25 18:36:17.148705 kernel: acpiphp: Slot [5] registered
Jun 25 18:36:17.148719 kernel: acpiphp: Slot [6] registered
Jun 25 18:36:17.148733 kernel: acpiphp: Slot [7] registered
Jun 25 18:36:17.148747 kernel: acpiphp: Slot [8] registered
Jun 25 18:36:17.148761 kernel: acpiphp: Slot [9] registered
Jun 25 18:36:17.148775 kernel: acpiphp: Slot [10] registered
Jun 25 18:36:17.148789 kernel: acpiphp: Slot [11] registered
Jun 25 18:36:17.148804 kernel: acpiphp: Slot [12] registered
Jun 25 18:36:17.148817 kernel: acpiphp: Slot [13] registered
Jun 25 18:36:17.148834 kernel: acpiphp: Slot [14] registered
Jun 25 18:36:17.148849 kernel: acpiphp: Slot [15] registered
Jun 25 18:36:17.148863 kernel: acpiphp: Slot [16] registered
Jun 25 18:36:17.148876 kernel: acpiphp: Slot [17] registered
Jun 25 18:36:17.148890 kernel: acpiphp: Slot [18] registered
Jun 25 18:36:17.148903 kernel: acpiphp: Slot [19] registered
Jun 25 18:36:17.148918 kernel: acpiphp: Slot [20] registered
Jun 25 18:36:17.148931 kernel: acpiphp: Slot [21] registered
Jun 25 18:36:17.148945 kernel: acpiphp: Slot [22] registered
Jun 25 18:36:17.148963 kernel: acpiphp: Slot [23] registered
Jun 25 18:36:17.149174 kernel: acpiphp: Slot [24] registered
Jun 25 18:36:17.149191 kernel: acpiphp: Slot [25] registered
Jun 25 18:36:17.149205 kernel: acpiphp: Slot [26] registered
Jun 25 18:36:17.149219 kernel: acpiphp: Slot [27] registered
Jun 25 18:36:17.149232 kernel: acpiphp: Slot [28] registered
Jun 25 18:36:17.149246 kernel: acpiphp: Slot [29] registered
Jun 25 18:36:17.149260 kernel: acpiphp: Slot [30] registered
Jun 25 18:36:17.149273 kernel: acpiphp: Slot [31] registered
Jun 25 18:36:17.149287 kernel: PCI host bridge to bus 0000:00
Jun 25 18:36:17.149443 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 18:36:17.149572 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 18:36:17.149698 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 18:36:17.149820 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jun 25 18:36:17.149989 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:36:17.150376 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 18:36:17.150717 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 18:36:17.150882 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jun 25 18:36:17.151205 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jun 25 18:36:17.152360 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jun 25 18:36:17.152592 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jun 25 18:36:17.152920 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jun 25 18:36:17.153186 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jun 25 18:36:17.153344 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jun 25 18:36:17.153484 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jun 25 18:36:17.153624 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jun 25 18:36:17.153763 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 20507 usecs
Jun 25 18:36:17.153911 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jun 25 18:36:17.154053 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jun 25 18:36:17.154216 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jun 25 18:36:17.154358 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 18:36:17.154572 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jun 25 18:36:17.154727 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jun 25 18:36:17.154885 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jun 25 18:36:17.155029 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jun 25 18:36:17.155052 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 18:36:17.155099 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 18:36:17.155118 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 18:36:17.155130 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 18:36:17.155143 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 18:36:17.155155 kernel: iommu: Default domain type: Translated
Jun 25 18:36:17.155168 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:36:17.155181 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:36:17.155195 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 18:36:17.155209 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 25 18:36:17.155222 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jun 25 18:36:17.155375 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jun 25 18:36:17.155510 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jun 25 18:36:17.155641 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 18:36:17.155659 kernel: vgaarb: loaded
Jun 25 18:36:17.155675 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jun 25 18:36:17.155691 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jun 25 18:36:17.155706 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 18:36:17.155721 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:36:17.155742 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:36:17.155755 kernel: pnp: PnP ACPI init
Jun 25 18:36:17.155771 kernel: pnp: PnP ACPI: found 5 devices
Jun 25 18:36:17.155787 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:36:17.155803 kernel: NET: Registered PF_INET protocol family
Jun 25 18:36:17.155819 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:36:17.155833 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 25 18:36:17.155851 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:36:17.155868 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 25 18:36:17.155888 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 25 18:36:17.155903 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 25 18:36:17.155918 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 25 18:36:17.155931 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 25 18:36:17.155945 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:36:17.155959 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:36:17.156118 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 18:36:17.156250 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 18:36:17.156381 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 18:36:17.156596 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jun 25 18:36:17.156751 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 18:36:17.156773 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:36:17.156791 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 25 18:36:17.156808 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jun 25 18:36:17.156824 kernel: clocksource: Switched to clocksource tsc
Jun 25 18:36:17.156841 kernel: Initialise system trusted keyrings
Jun 25 18:36:17.156942 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jun 25 18:36:17.156963 kernel: Key type asymmetric registered
Jun 25 18:36:17.156980 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:36:17.156996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:36:17.157012 kernel: io scheduler mq-deadline registered
Jun 25 18:36:17.157243 kernel: io scheduler kyber registered
Jun 25 18:36:17.157262 kernel: io scheduler bfq registered
Jun 25 18:36:17.157277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:36:17.157291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:36:17.157310 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:36:17.157324 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 18:36:17.157337 kernel: i8042: Warning: Keylock active
Jun 25 18:36:17.157351 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 18:36:17.157367 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 18:36:17.157543 kernel: rtc_cmos 00:00: RTC can wake from S4
Jun 25 18:36:17.157753 kernel: rtc_cmos 00:00: registered as rtc0
Jun 25 18:36:17.157903 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T18:36:16 UTC (1719340576)
Jun 25 18:36:17.158358 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jun 25 18:36:17.158387 kernel: intel_pstate: CPU model not supported
Jun 25 18:36:17.158406 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:36:17.158458 kernel: Segment Routing with IPv6
Jun 25 18:36:17.158477 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:36:17.158494 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:36:17.158906 kernel: Key type dns_resolver registered
Jun 25 18:36:17.158924 kernel: IPI shorthand broadcast: enabled
Jun 25 18:36:17.158940 kernel: sched_clock: Marking stable (762003486, 273290832)->(1150055913, -114761595)
Jun 25 18:36:17.158962 kernel: registered taskstats version 1
Jun 25 18:36:17.158978 kernel: Loading compiled-in X.509 certificates
Jun 25 18:36:17.158995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:36:17.159011 kernel: Key type .fscrypt registered
Jun 25 18:36:17.159028 kernel: Key type fscrypt-provisioning registered
Jun 25 18:36:17.159044 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:36:17.159060 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:36:17.159111 kernel: ima: No architecture policies found
Jun 25 18:36:17.159124 kernel: clk: Disabling unused clocks
Jun 25 18:36:17.159142 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:36:17.159155 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:36:17.159168 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:36:17.159183 kernel: Run /init as init process
Jun 25 18:36:17.159199 kernel: with arguments:
Jun 25 18:36:17.159215 kernel: /init
Jun 25 18:36:17.159231 kernel: with environment:
Jun 25 18:36:17.159247 kernel: HOME=/
Jun 25 18:36:17.159263 kernel: TERM=linux
Jun 25 18:36:17.159284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:36:17.159310 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:36:17.159355 systemd[1]: Detected virtualization amazon.
Jun 25 18:36:17.159440 systemd[1]: Detected architecture x86-64.
Jun 25 18:36:17.159465 systemd[1]: Running in initrd.
Jun 25 18:36:17.159487 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:36:17.159505 systemd[1]: Hostname set to .
Jun 25 18:36:17.159524 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:36:17.159542 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:36:17.159560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:36:17.159579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:36:17.159598 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:36:17.159616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:36:17.159638 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:36:17.159657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:36:17.159679 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:36:17.159698 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:36:17.159716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:36:17.159734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:36:17.159752 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:36:17.159775 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:36:17.159792 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:36:17.159811 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:36:17.159828 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:36:17.159846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:36:17.159865 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 18:36:17.159889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:36:17.159910 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:36:17.159928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:36:17.159947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:36:17.159968 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:36:17.159988 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:36:17.160006 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 25 18:36:17.160025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:36:17.160047 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 18:36:17.160066 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 18:36:17.160179 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:36:17.160197 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:36:17.160214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:36:17.160233 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 25 18:36:17.160251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:36:17.160302 systemd-journald[178]: Collecting audit messages is disabled.
Jun 25 18:36:17.160346 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 18:36:17.160402 systemd-journald[178]: Journal started
Jun 25 18:36:17.160666 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2aa8e8a6059fc4f8ff2f0387be1d2a) is 4.8M, max 38.6M, 33.8M free.
Jun 25 18:36:17.171360 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:36:17.175301 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:36:17.200515 systemd-modules-load[179]: Inserted module 'overlay'
Jun 25 18:36:17.213322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:36:17.338769 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 25 18:36:17.338809 kernel: Bridge firewalling registered
Jun 25 18:36:17.256784 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jun 25 18:36:17.343083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:36:17.347374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:36:17.350449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:36:17.364579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:36:17.371314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:36:17.377691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:36:17.381274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:36:17.411127 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:36:17.415167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:36:17.415502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:36:17.429433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 18:36:17.443319 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:36:17.467102 dracut-cmdline[211]: dracut-dracut-053
Jun 25 18:36:17.472925 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:36:17.520885 systemd-resolved[212]: Positive Trust Anchors:
Jun 25 18:36:17.520906 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:36:17.520970 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:36:17.544279 systemd-resolved[212]: Defaulting to hostname 'linux'.
Jun 25 18:36:17.547999 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:36:17.549838 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:36:17.643542 kernel: SCSI subsystem initialized
Jun 25 18:36:17.680107 kernel: Loading iSCSI transport class v2.0-870.
Jun 25 18:36:17.698104 kernel: iscsi: registered transport (tcp)
Jun 25 18:36:17.750106 kernel: iscsi: registered transport (qla4xxx)
Jun 25 18:36:17.750356 kernel: QLogic iSCSI HBA Driver
Jun 25 18:36:17.822914 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:36:17.828299 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 25 18:36:17.896351 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 25 18:36:17.896432 kernel: device-mapper: uevent: version 1.0.3
Jun 25 18:36:17.896454 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 25 18:36:17.949126 kernel: raid6: avx512x4 gen() 17008 MB/s
Jun 25 18:36:17.966118 kernel: raid6: avx512x2 gen() 16275 MB/s
Jun 25 18:36:17.983131 kernel: raid6: avx512x1 gen() 9449 MB/s
Jun 25 18:36:18.001127 kernel: raid6: avx2x4 gen() 10710 MB/s
Jun 25 18:36:18.019118 kernel: raid6: avx2x2 gen() 9634 MB/s
Jun 25 18:36:18.036187 kernel: raid6: avx2x1 gen() 10155 MB/s
Jun 25 18:36:18.036268 kernel: raid6: using algorithm avx512x4 gen() 17008 MB/s
Jun 25 18:36:18.055625 kernel: raid6: .... xor() 5493 MB/s, rmw enabled
Jun 25 18:36:18.055708 kernel: raid6: using avx512x2 recovery algorithm
Jun 25 18:36:18.108833 kernel: xor: automatically using best checksumming function avx
Jun 25 18:36:18.350216 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 25 18:36:18.360659 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:36:18.366413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:36:18.400865 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jun 25 18:36:18.410581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:36:18.423286 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 25 18:36:18.442912 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jun 25 18:36:18.523141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:36:18.536982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:36:18.643482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:36:18.655665 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 25 18:36:18.709869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:36:18.714765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:36:18.727994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:36:18.735254 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:36:18.767683 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 25 18:36:18.858280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:36:18.872408 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jun 25 18:36:18.897090 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jun 25 18:36:18.897378 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jun 25 18:36:18.897573 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:f3:21:06:2d:05
Jun 25 18:36:18.904191 kernel: cryptd: max_cpu_qlen set to 1000
Jun 25 18:36:18.907165 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Jun 25 18:36:18.936632 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 25 18:36:18.936709 kernel: AES CTR mode by8 optimization enabled
Jun 25 18:36:18.940416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:36:18.940735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:36:18.946136 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:36:18.948652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:36:18.948873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:36:18.951714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:36:18.960768 kernel: nvme nvme0: pci function 0000:00:04.0
Jun 25 18:36:18.961002 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 25 18:36:18.964713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:36:18.981098 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jun 25 18:36:18.986130 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 25 18:36:18.986195 kernel: GPT:9289727 != 16777215
Jun 25 18:36:18.986215 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 25 18:36:18.986233 kernel: GPT:9289727 != 16777215
Jun 25 18:36:18.986249 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 25 18:36:18.986267 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 25 18:36:19.076110 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Jun 25 18:36:19.087101 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jun 25 18:36:19.173707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:36:19.184369 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 18:36:19.229851 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jun 25 18:36:19.292537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:36:19.308774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jun 25 18:36:19.337968 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jun 25 18:36:19.345659 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jun 25 18:36:19.345805 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jun 25 18:36:19.379643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 25 18:36:19.406181 disk-uuid[623]: Primary Header is updated.
Jun 25 18:36:19.406181 disk-uuid[623]: Secondary Entries is updated.
Jun 25 18:36:19.406181 disk-uuid[623]: Secondary Header is updated.
Jun 25 18:36:19.431112 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 25 18:36:19.446097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 25 18:36:20.446093 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 25 18:36:20.447325 disk-uuid[624]: The operation has completed successfully.
Jun 25 18:36:20.704259 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 25 18:36:20.704625 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 25 18:36:20.749378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 25 18:36:20.761037 sh[884]: Success
Jun 25 18:36:20.798378 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 25 18:36:20.933713 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 25 18:36:20.949027 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 25 18:36:20.960761 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 25 18:36:21.006096 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0
Jun 25 18:36:21.006175 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:36:21.007224 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 25 18:36:21.007250 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 25 18:36:21.008430 kernel: BTRFS info (device dm-0): using free space tree
Jun 25 18:36:21.093095 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jun 25 18:36:21.135646 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 25 18:36:21.139734 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 25 18:36:21.162656 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 25 18:36:21.182948 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 25 18:36:21.223967 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:36:21.224058 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:36:21.224107 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jun 25 18:36:21.228115 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jun 25 18:36:21.251267 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:36:21.250406 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jun 25 18:36:21.274763 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 25 18:36:21.288424 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 25 18:36:21.360219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:36:21.368389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:36:21.414041 systemd-networkd[1076]: lo: Link UP
Jun 25 18:36:21.414058 systemd-networkd[1076]: lo: Gained carrier
Jun 25 18:36:21.416327 systemd-networkd[1076]: Enumeration completed
Jun 25 18:36:21.416785 systemd-networkd[1076]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:36:21.416790 systemd-networkd[1076]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:36:21.419042 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:36:21.428940 systemd[1]: Reached target network.target - Network.
Jun 25 18:36:21.430732 systemd-networkd[1076]: eth0: Link UP
Jun 25 18:36:21.430738 systemd-networkd[1076]: eth0: Gained carrier
Jun 25 18:36:21.430754 systemd-networkd[1076]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:36:21.443270 systemd-networkd[1076]: eth0: DHCPv4 address 172.31.20.217/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jun 25 18:36:21.670831 ignition[1009]: Ignition 2.19.0
Jun 25 18:36:21.670842 ignition[1009]: Stage: fetch-offline
Jun 25 18:36:21.671046 ignition[1009]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:21.671054 ignition[1009]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:21.672230 ignition[1009]: Ignition finished successfully
Jun 25 18:36:21.679274 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:36:21.689448 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 25 18:36:21.713113 ignition[1086]: Ignition 2.19.0
Jun 25 18:36:21.713130 ignition[1086]: Stage: fetch
Jun 25 18:36:21.713693 ignition[1086]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:21.713708 ignition[1086]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:21.713822 ignition[1086]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:21.742753 ignition[1086]: PUT result: OK
Jun 25 18:36:21.747820 ignition[1086]: parsed url from cmdline: ""
Jun 25 18:36:21.747833 ignition[1086]: no config URL provided
Jun 25 18:36:21.747842 ignition[1086]: reading system config file "/usr/lib/ignition/user.ign"
Jun 25 18:36:21.747857 ignition[1086]: no config at "/usr/lib/ignition/user.ign"
Jun 25 18:36:21.747907 ignition[1086]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:21.756652 ignition[1086]: PUT result: OK
Jun 25 18:36:21.756784 ignition[1086]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jun 25 18:36:21.764625 ignition[1086]: GET result: OK
Jun 25 18:36:21.764900 ignition[1086]: parsing config with SHA512: 04f2b5b8ce153720c7136e5747d2e383b7db263e04be5b08240d7bb35d79736aa9e835e940463b79beefb72611af097203fee57fb5feacf22b57081853dad13b
Jun 25 18:36:21.771554 unknown[1086]: fetched base config from "system"
Jun 25 18:36:21.771570 unknown[1086]: fetched base config from "system"
Jun 25 18:36:21.772474 ignition[1086]: fetch: fetch complete
Jun 25 18:36:21.771577 unknown[1086]: fetched user config from "aws"
Jun 25 18:36:21.772488 ignition[1086]: fetch: fetch passed
Jun 25 18:36:21.772551 ignition[1086]: Ignition finished successfully
Jun 25 18:36:21.777279 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 25 18:36:21.788629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 25 18:36:21.812943 ignition[1094]: Ignition 2.19.0
Jun 25 18:36:21.812958 ignition[1094]: Stage: kargs
Jun 25 18:36:21.813487 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:21.813554 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:21.813679 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:21.816366 ignition[1094]: PUT result: OK
Jun 25 18:36:21.821561 ignition[1094]: kargs: kargs passed
Jun 25 18:36:21.821678 ignition[1094]: Ignition finished successfully
Jun 25 18:36:21.825573 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 25 18:36:21.837284 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 18:36:21.868665 ignition[1101]: Ignition 2.19.0
Jun 25 18:36:21.868683 ignition[1101]: Stage: disks
Jun 25 18:36:21.869299 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:21.869314 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:21.869429 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:21.874555 ignition[1101]: PUT result: OK
Jun 25 18:36:21.881958 ignition[1101]: disks: disks passed
Jun 25 18:36:21.882019 ignition[1101]: Ignition finished successfully
Jun 25 18:36:21.894038 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 25 18:36:21.894487 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 25 18:36:21.904531 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 25 18:36:21.907543 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:36:21.910195 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:36:21.912650 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:36:21.920383 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 25 18:36:21.979176 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 25 18:36:21.983609 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 25 18:36:21.995541 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 25 18:36:22.164097 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none.
Jun 25 18:36:22.164995 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 25 18:36:22.175112 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:36:22.187267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:36:22.189425 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 25 18:36:22.194737 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 25 18:36:22.194967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 25 18:36:22.195005 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:36:22.210095 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1129)
Jun 25 18:36:22.213516 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:36:22.213589 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:36:22.213609 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jun 25 18:36:22.219179 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jun 25 18:36:22.220589 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:36:22.226172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 25 18:36:22.238583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 25 18:36:22.622192 initrd-setup-root[1153]: cut: /sysroot/etc/passwd: No such file or directory
Jun 25 18:36:22.642561 initrd-setup-root[1160]: cut: /sysroot/etc/group: No such file or directory
Jun 25 18:36:22.655098 initrd-setup-root[1167]: cut: /sysroot/etc/shadow: No such file or directory
Jun 25 18:36:22.665919 initrd-setup-root[1174]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 25 18:36:22.900229 systemd-networkd[1076]: eth0: Gained IPv6LL
Jun 25 18:36:22.972591 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 25 18:36:22.987436 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 25 18:36:23.014970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 25 18:36:23.027669 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 25 18:36:23.030289 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:36:23.065508 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 25 18:36:23.077225 ignition[1243]: INFO : Ignition 2.19.0
Jun 25 18:36:23.077225 ignition[1243]: INFO : Stage: mount
Jun 25 18:36:23.079372 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:23.079372 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:23.079372 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:23.083494 ignition[1243]: INFO : PUT result: OK
Jun 25 18:36:23.086709 ignition[1243]: INFO : mount: mount passed
Jun 25 18:36:23.086709 ignition[1243]: INFO : Ignition finished successfully
Jun 25 18:36:23.092007 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 25 18:36:23.100237 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 25 18:36:23.174411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 25 18:36:23.229138 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1254)
Jun 25 18:36:23.233399 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a
Jun 25 18:36:23.233471 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 25 18:36:23.233503 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jun 25 18:36:23.242156 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jun 25 18:36:23.250376 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 25 18:36:23.312020 ignition[1271]: INFO : Ignition 2.19.0
Jun 25 18:36:23.312020 ignition[1271]: INFO : Stage: files
Jun 25 18:36:23.314714 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:23.314714 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:23.314714 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:23.319401 ignition[1271]: INFO : PUT result: OK
Jun 25 18:36:23.323613 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping
Jun 25 18:36:23.325614 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 25 18:36:23.325614 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 25 18:36:23.360417 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 25 18:36:23.362033 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 25 18:36:23.362033 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 25 18:36:23.361167 unknown[1271]: wrote ssh authorized keys file for user: core
Jun 25 18:36:23.367731 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:36:23.370289 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 25 18:36:23.423367 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 25 18:36:23.549663 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 25 18:36:23.549663 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 25 18:36:23.554310 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 25 18:36:23.554310 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:36:23.560893 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 25 18:36:23.560893 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:36:23.560893 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 25 18:36:23.566777 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:36:23.566777 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 25 18:36:23.566777 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:36:23.575813 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 25 18:36:23.575813 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jun 25 18:36:23.581521 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jun 25 18:36:23.581521 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jun 25 18:36:23.581521 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jun 25 18:36:24.029701 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 25 18:36:24.669289 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jun 25 18:36:24.669289 ignition[1271]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 25 18:36:24.676006 ignition[1271]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 25 18:36:24.682781 ignition[1271]: INFO : files: files passed
Jun 25 18:36:24.682781 ignition[1271]: INFO : Ignition finished successfully
Jun 25 18:36:24.692300 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 25 18:36:24.714469 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 25 18:36:24.724324 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 25 18:36:24.738208 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 25 18:36:24.753283 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 25 18:36:24.765535 initrd-setup-root-after-ignition[1300]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:36:24.765535 initrd-setup-root-after-ignition[1300]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:36:24.769873 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 25 18:36:24.774675 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:36:24.775015 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 25 18:36:24.785292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 25 18:36:24.820152 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 25 18:36:24.820285 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 25 18:36:24.823661 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 25 18:36:24.825992 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 25 18:36:24.828431 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 25 18:36:24.836528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 25 18:36:24.856481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:36:24.865402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 25 18:36:24.882932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:36:24.885801 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:36:24.888580 systemd[1]: Stopped target timers.target - Timer Units.
Jun 25 18:36:24.891024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 25 18:36:24.892175 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 25 18:36:24.901406 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 25 18:36:24.905143 systemd[1]: Stopped target basic.target - Basic System.
Jun 25 18:36:24.907333 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 25 18:36:24.907538 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 25 18:36:24.915357 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 25 18:36:24.915625 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 25 18:36:24.924199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 25 18:36:24.926172 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 25 18:36:24.932578 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 25 18:36:24.936105 systemd[1]: Stopped target swap.target - Swaps.
Jun 25 18:36:24.946012 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 25 18:36:24.946631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 25 18:36:24.953134 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:36:24.972526 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:36:24.972717 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 25 18:36:24.975258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:36:24.978932 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 25 18:36:24.980098 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 25 18:36:24.982770 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 25 18:36:24.984505 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 25 18:36:24.987290 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 25 18:36:24.988385 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 25 18:36:24.996420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 25 18:36:25.002370 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 25 18:36:25.004058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 25 18:36:25.004287 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:36:25.006376 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 25 18:36:25.006545 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 25 18:36:25.025572 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 25 18:36:25.025704 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 25 18:36:25.033429 ignition[1324]: INFO : Ignition 2.19.0
Jun 25 18:36:25.034549 ignition[1324]: INFO : Stage: umount
Jun 25 18:36:25.035621 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 25 18:36:25.035621 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 25 18:36:25.035621 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 25 18:36:25.040403 ignition[1324]: INFO : PUT result: OK
Jun 25 18:36:25.044651 ignition[1324]: INFO : umount: umount passed
Jun 25 18:36:25.044651 ignition[1324]: INFO : Ignition finished successfully
Jun 25 18:36:25.045450 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 25 18:36:25.045596 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 25 18:36:25.050382 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 25 18:36:25.050462 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 25 18:36:25.052561 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 25 18:36:25.052614 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 25 18:36:25.056579 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 25 18:36:25.056634 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 25 18:36:25.059348 systemd[1]: Stopped target network.target - Network.
Jun 25 18:36:25.061513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 25 18:36:25.061586 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 25 18:36:25.064568 systemd[1]: Stopped target paths.target - Path Units.
Jun 25 18:36:25.067792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 25 18:36:25.068116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:36:25.070511 systemd[1]: Stopped target slices.target - Slice Units.
Jun 25 18:36:25.071765 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 25 18:36:25.073543 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 25 18:36:25.073708 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:36:25.076552 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 25 18:36:25.076610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:36:25.078702 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 25 18:36:25.078759 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 25 18:36:25.080668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 25 18:36:25.080722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 25 18:36:25.083300 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 25 18:36:25.088675 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 25 18:36:25.091457 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 25 18:36:25.092194 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 25 18:36:25.092289 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 25 18:36:25.095112 systemd-networkd[1076]: eth0: DHCPv6 lease lost
Jun 25 18:36:25.097483 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 25 18:36:25.097594 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 25 18:36:25.101733 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 25 18:36:25.102748 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 25 18:36:25.117299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 25 18:36:25.123123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 25 18:36:25.152066 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 25 18:36:25.152166 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:36:25.184432 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 25 18:36:25.185708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 25 18:36:25.185897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 25 18:36:25.187752 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 25 18:36:25.189421 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:36:25.192968 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 25 18:36:25.193038 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:36:25.200670 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 25 18:36:25.200858 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:36:25.203755 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:36:25.234647 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 25 18:36:25.234848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:36:25.237970 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 25 18:36:25.238356 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 25 18:36:25.244551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 25 18:36:25.244632 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:36:25.252368 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 25 18:36:25.252431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:36:25.254623 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 25 18:36:25.254702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 25 18:36:25.262354 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 25 18:36:25.262448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 25 18:36:25.266028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 25 18:36:25.266180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 18:36:25.280398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 25 18:36:25.283188 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 25 18:36:25.283304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:36:25.285485 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 25 18:36:25.285569 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:36:25.287367 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 25 18:36:25.287440 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:36:25.289128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:36:25.289201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:36:25.308997 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 25 18:36:25.309274 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 25 18:36:25.313140 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 25 18:36:25.328623 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 25 18:36:25.360696 systemd[1]: Switching root.
Jun 25 18:36:25.414815 systemd-journald[178]: Journal stopped
Jun 25 18:36:27.774884 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jun 25 18:36:27.774991 kernel: SELinux: policy capability network_peer_controls=1
Jun 25 18:36:27.775018 kernel: SELinux: policy capability open_perms=1
Jun 25 18:36:27.775039 kernel: SELinux: policy capability extended_socket_class=1
Jun 25 18:36:27.775067 kernel: SELinux: policy capability always_check_network=0
Jun 25 18:36:27.776537 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 25 18:36:27.776562 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 25 18:36:27.776581 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 25 18:36:27.776599 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 25 18:36:27.776618 kernel: audit: type=1403 audit(1719340586.107:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 25 18:36:27.776649 systemd[1]: Successfully loaded SELinux policy in 59.030ms.
Jun 25 18:36:27.776778 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.911ms.
Jun 25 18:36:27.776805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:36:27.776830 systemd[1]: Detected virtualization amazon.
Jun 25 18:36:27.776850 systemd[1]: Detected architecture x86-64.
Jun 25 18:36:27.776869 systemd[1]: Detected first boot.
Jun 25 18:36:27.776970 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:36:27.776993 zram_generator::config[1366]: No configuration found.
Jun 25 18:36:27.777020 systemd[1]: Populated /etc with preset unit settings.
Jun 25 18:36:27.777040 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 25 18:36:27.777059 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 25 18:36:27.778441 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:36:27.778472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 25 18:36:27.778494 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 25 18:36:27.778514 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 25 18:36:27.778533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 25 18:36:27.778552 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 25 18:36:27.778570 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 25 18:36:27.778588 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 25 18:36:27.778606 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 25 18:36:27.778630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:36:27.778649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:36:27.778670 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 25 18:36:27.778690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 25 18:36:27.778709 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 25 18:36:27.778728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:36:27.778747 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 25 18:36:27.778772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:36:27.778791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 25 18:36:27.778813 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 25 18:36:27.778833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 25 18:36:27.778860 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 25 18:36:27.778880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 25 18:36:27.778898 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 25 18:36:27.778919 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:36:27.778938 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:36:27.778957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 25 18:36:27.778980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 25 18:36:27.779000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 18:36:27.779021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 18:36:27.779041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 18:36:27.779060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 25 18:36:27.781236 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 25 18:36:27.781276 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 25 18:36:27.781301 systemd[1]: Mounting media.mount - External Media Directory...
Jun 25 18:36:27.781326 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:27.781357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 25 18:36:27.781381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 25 18:36:27.781404 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 25 18:36:27.781429 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 25 18:36:27.781564 systemd[1]: Reached target machines.target - Containers.
Jun 25 18:36:27.781622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 25 18:36:27.781642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:36:27.781662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 18:36:27.781692 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 25 18:36:27.781715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:36:27.781734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:36:27.781754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:36:27.781773 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 25 18:36:27.781793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:36:27.781813 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 25 18:36:27.781833 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 25 18:36:27.781858 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 25 18:36:27.781880 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 25 18:36:27.781900 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 25 18:36:27.781964 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 18:36:27.781985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 18:36:27.782034 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 25 18:36:27.782057 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 25 18:36:27.782132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 25 18:36:27.782154 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 25 18:36:27.782207 systemd[1]: Stopped verity-setup.service.
Jun 25 18:36:27.782228 kernel: loop: module loaded
Jun 25 18:36:27.782250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:27.782299 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 25 18:36:27.782320 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 25 18:36:27.782341 systemd[1]: Mounted media.mount - External Media Directory.
Jun 25 18:36:27.782390 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 25 18:36:27.782414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 25 18:36:27.782468 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 25 18:36:27.782491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 18:36:27.782512 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 25 18:36:27.782772 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 25 18:36:27.782830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:36:27.782852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:36:27.782875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:36:27.782924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:36:27.782944 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:36:27.782964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:36:27.783015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 25 18:36:27.783039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 25 18:36:27.783105 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 25 18:36:27.785158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 25 18:36:27.785191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:36:27.785213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 25 18:36:27.785237 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 25 18:36:27.785261 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 25 18:36:27.785286 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 25 18:36:27.785309 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 25 18:36:27.785381 systemd-journald[1440]: Collecting audit messages is disabled.
Jun 25 18:36:27.785427 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jun 25 18:36:27.785450 systemd-journald[1440]: Journal started
Jun 25 18:36:27.785494 systemd-journald[1440]: Runtime Journal (/run/log/journal/ec2aa8e8a6059fc4f8ff2f0387be1d2a) is 4.8M, max 38.6M, 33.8M free.
Jun 25 18:36:27.188559 systemd[1]: Queued start job for default target multi-user.target.
Jun 25 18:36:27.233249 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 25 18:36:27.233733 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 25 18:36:27.799145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 25 18:36:27.820120 kernel: fuse: init (API version 7.39)
Jun 25 18:36:27.832050 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 25 18:36:27.836120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:36:27.849845 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 25 18:36:27.867515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:36:27.867602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 25 18:36:27.904694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 25 18:36:27.904787 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 18:36:27.915599 kernel: ACPI: bus type drm_connector registered
Jun 25 18:36:27.940573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 25 18:36:27.944271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:36:27.948459 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:36:27.950976 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 25 18:36:27.953009 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 25 18:36:27.954941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 25 18:36:27.957251 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 25 18:36:27.962105 kernel: loop0: detected capacity change from 0 to 139760
Jun 25 18:36:27.966489 kernel: block loop0: the capability attribute has been deprecated.
Jun 25 18:36:27.966363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 25 18:36:28.043260 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 25 18:36:28.065381 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 25 18:36:28.071286 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 25 18:36:28.072961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 25 18:36:28.078037 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 25 18:36:28.086227 systemd-journald[1440]: Time spent on flushing to /var/log/journal/ec2aa8e8a6059fc4f8ff2f0387be1d2a is 119.083ms for 965 entries.
Jun 25 18:36:28.086227 systemd-journald[1440]: System Journal (/var/log/journal/ec2aa8e8a6059fc4f8ff2f0387be1d2a) is 8.0M, max 195.6M, 187.6M free.
Jun 25 18:36:28.226671 systemd-journald[1440]: Received client request to flush runtime journal.
Jun 25 18:36:28.226770 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 25 18:36:28.226806 kernel: loop1: detected capacity change from 0 to 60984
Jun 25 18:36:28.083045 systemd-tmpfiles[1463]: ACLs are not supported, ignoring.
Jun 25 18:36:28.085052 systemd-tmpfiles[1463]: ACLs are not supported, ignoring.
Jun 25 18:36:28.089264 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jun 25 18:36:28.091249 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 25 18:36:28.117348 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 25 18:36:28.120692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 25 18:36:28.147651 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 25 18:36:28.191238 udevadm[1505]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jun 25 18:36:28.234050 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 25 18:36:28.255713 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 25 18:36:28.258662 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jun 25 18:36:28.282322 kernel: loop2: detected capacity change from 0 to 210664
Jun 25 18:36:28.288492 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 25 18:36:28.299335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 18:36:28.341414 kernel: loop3: detected capacity change from 0 to 80568
Jun 25 18:36:28.353009 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Jun 25 18:36:28.354470 systemd-tmpfiles[1514]: ACLs are not supported, ignoring.
Jun 25 18:36:28.367413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 18:36:28.487141 kernel: loop4: detected capacity change from 0 to 139760
Jun 25 18:36:28.535133 kernel: loop5: detected capacity change from 0 to 60984
Jun 25 18:36:28.572111 kernel: loop6: detected capacity change from 0 to 210664
Jun 25 18:36:28.603131 kernel: loop7: detected capacity change from 0 to 80568
Jun 25 18:36:28.621330 (sd-merge)[1519]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jun 25 18:36:28.625866 (sd-merge)[1519]: Merged extensions into '/usr'.
Jun 25 18:36:28.650429 systemd[1]: Reloading requested from client PID 1475 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 25 18:36:28.650609 systemd[1]: Reloading...
Jun 25 18:36:28.834889 zram_generator::config[1540]: No configuration found.
Jun 25 18:36:29.185389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:36:29.342568 systemd[1]: Reloading finished in 690 ms.
Jun 25 18:36:29.376403 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 18:36:29.385360 systemd[1]: Starting ensure-sysext.service...
Jun 25 18:36:29.395394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 18:36:29.426373 systemd[1]: Reloading requested from client PID 1591 ('systemctl') (unit ensure-sysext.service)...
Jun 25 18:36:29.426393 systemd[1]: Reloading...
Jun 25 18:36:29.433400 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 18:36:29.433920 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 18:36:29.448322 systemd-tmpfiles[1592]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 18:36:29.448759 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Jun 25 18:36:29.448845 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Jun 25 18:36:29.455446 systemd-tmpfiles[1592]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:36:29.455461 systemd-tmpfiles[1592]: Skipping /boot
Jun 25 18:36:29.470513 systemd-tmpfiles[1592]: Detected autofs mount point /boot during canonicalization of boot.
Jun 25 18:36:29.470530 systemd-tmpfiles[1592]: Skipping /boot
Jun 25 18:36:29.544221 zram_generator::config[1618]: No configuration found.
Jun 25 18:36:29.595195 ldconfig[1470]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 18:36:29.720763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:36:29.794670 systemd[1]: Reloading finished in 367 ms.
Jun 25 18:36:29.825149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 18:36:29.828700 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:36:29.839816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:36:29.863331 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:36:29.874609 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:36:29.884343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:36:29.892649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:36:29.899358 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:36:29.906710 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:36:29.922490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:29.922912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:36:29.932732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:36:29.941792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:36:29.948555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:36:29.950807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:36:29.951178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:29.968586 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:36:29.992294 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:29.994720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:36:29.997706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:36:29.998110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:30.020795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:36:30.021342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:36:30.024803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:36:30.025003 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:36:30.029064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:30.031952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:36:30.053532 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:36:30.056098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:36:30.056371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:36:30.056728 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 18:36:30.060912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:36:30.062415 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:36:30.064914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:36:30.065165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:36:30.073761 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:36:30.087556 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:36:30.100325 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 18:36:30.102360 systemd[1]: Finished ensure-sysext.service.
Jun 25 18:36:30.115378 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:36:30.117785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:36:30.138469 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 18:36:30.149651 systemd-udevd[1675]: Using default interface naming scheme 'v255'.
Jun 25 18:36:30.168347 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:36:30.174366 augenrules[1709]: No rules
Jun 25 18:36:30.177891 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:36:30.200617 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 18:36:30.203777 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 18:36:30.207855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:36:30.225954 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:36:30.328153 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 25 18:36:30.353109 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1729)
Jun 25 18:36:30.351396 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line.
Jun 25 18:36:30.414491 systemd-resolved[1674]: Positive Trust Anchors:
Jun 25 18:36:30.414515 systemd-resolved[1674]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:36:30.414567 systemd-resolved[1674]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:36:30.422296 systemd-networkd[1718]: lo: Link UP
Jun 25 18:36:30.422307 systemd-networkd[1718]: lo: Gained carrier
Jun 25 18:36:30.424733 systemd-networkd[1718]: Enumeration completed
Jun 25 18:36:30.424966 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:36:30.425470 systemd-networkd[1718]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:36:30.425478 systemd-networkd[1718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:36:30.429646 systemd-resolved[1674]: Defaulting to hostname 'linux'.
Jun 25 18:36:30.431096 systemd-networkd[1718]: eth0: Link UP
Jun 25 18:36:30.433301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:36:30.439645 systemd-networkd[1718]: eth0: Gained carrier
Jun 25 18:36:30.439686 systemd-networkd[1718]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:36:30.442174 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:36:30.444350 systemd[1]: Reached target network.target - Network.
Jun 25 18:36:30.446660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:36:30.454883 systemd-networkd[1718]: eth0: DHCPv4 address 172.31.20.217/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jun 25 18:36:30.504461 systemd-networkd[1718]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:36:30.507114 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jun 25 18:36:30.511423 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 25 18:36:30.517791 kernel: ACPI: button: Power Button [PWRF]
Jun 25 18:36:30.517886 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jun 25 18:36:30.530305 kernel: ACPI: button: Sleep Button [SLPF]
Jun 25 18:36:30.535102 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jun 25 18:36:30.590205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:36:30.686129 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1731)
Jun 25 18:36:30.689743 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 18:36:30.857965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jun 25 18:36:31.014088 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:36:31.022699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:36:31.045470 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:36:31.053276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:36:31.091224 lvm[1835]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:36:31.095479 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:36:31.126405 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:36:31.128660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:36:31.130242 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:36:31.132061 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:36:31.133743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:36:31.135945 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:36:31.137362 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:36:31.140510 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:36:31.142846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:36:31.142897 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:36:31.144032 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:36:31.146630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:36:31.153976 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:36:31.177287 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:36:31.183207 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:36:31.188228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:36:31.190623 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:36:31.193839 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:36:31.195384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:36:31.195423 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:36:31.201748 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:36:31.211862 lvm[1842]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:36:31.221607 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 25 18:36:31.229324 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:36:31.244875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:36:31.255770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:36:31.257002 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:36:31.272035 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:36:31.285432 systemd[1]: Started ntpd.service - Network Time Service.
Jun 25 18:36:31.320562 jq[1846]: false
Jun 25 18:36:31.310344 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:36:31.326283 systemd[1]: Starting setup-oem.service - Setup OEM...
Jun 25 18:36:31.334898 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:36:31.347325 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:36:31.357350 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:36:31.359090 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:36:31.359791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:36:31.368138 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:36:31.376979 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:36:31.379878 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:36:31.388514 extend-filesystems[1847]: Found loop4
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found loop5
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found loop6
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found loop7
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p1
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p2
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p3
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found usr
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p4
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p6
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p7
Jun 25 18:36:31.407213 extend-filesystems[1847]: Found nvme0n1p9
Jun 25 18:36:31.407213 extend-filesystems[1847]: Checking size of /dev/nvme0n1p9
Jun 25 18:36:31.406483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:36:31.406706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:36:31.469551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:36:31.469804 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:36:31.472336 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:36:31.473276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:36:31.478764 dbus-daemon[1845]: [system] SELinux support is enabled
Jun 25 18:36:31.482534 dbus-daemon[1845]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1718 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jun 25 18:36:31.489327 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:36:31.492753 jq[1860]: true
Jun 25 18:36:31.501102 extend-filesystems[1847]: Resized partition /dev/nvme0n1p9
Jun 25 18:36:31.508875 extend-filesystems[1880]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 18:36:31.537584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:36:31.538150 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:36:31.541382 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:36:31.541415 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:36:31.560224 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jun 25 18:36:31.557603 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 25 18:36:31.563333 (ntainerd)[1879]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: ----------------------------------------------------
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: ntp-4 is maintained by Network Time Foundation,
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: corporation. Support and training for ntp-4 are
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: available at https://www.nwtime.org/support
Jun 25 18:36:31.571440 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: ----------------------------------------------------
Jun 25 18:36:31.565425 ntpd[1849]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting
Jun 25 18:36:31.565471 ntpd[1849]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jun 25 18:36:31.565482 ntpd[1849]: ----------------------------------------------------
Jun 25 18:36:31.565492 ntpd[1849]: ntp-4 is maintained by Network Time Foundation,
Jun 25 18:36:31.565502 ntpd[1849]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun 25 18:36:31.565511 ntpd[1849]: corporation. Support and training for ntp-4 are
Jun 25 18:36:31.565523 ntpd[1849]: available at https://www.nwtime.org/support
Jun 25 18:36:31.585048 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jun 25 18:36:31.565532 ntpd[1849]: ----------------------------------------------------
Jun 25 18:36:31.596838 ntpd[1849]: proto: precision = 0.104 usec (-23)
Jun 25 18:36:31.601586 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: proto: precision = 0.104 usec (-23)
Jun 25 18:36:31.604007 ntpd[1849]: basedate set to 2024-06-13
Jun 25 18:36:31.604225 systemd-networkd[1718]: eth0: Gained IPv6LL
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: basedate set to 2024-06-13
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: gps base set to 2024-06-16 (week 2319)
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen and drop on 0 v6wildcard [::]:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen normally on 2 lo 127.0.0.1:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen normally on 3 eth0 172.31.20.217:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen normally on 4 lo [::1]:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listen normally on 5 eth0 [fe80::4f3:21ff:fe06:2d05%2]:123
Jun 25 18:36:31.641383 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: Listening on routing socket on fd #22 for interface updates
Jun 25 18:36:31.604036 ntpd[1849]: gps base set to 2024-06-16 (week 2319)
Jun 25 18:36:31.642360 tar[1866]: linux-amd64/helm
Jun 25 18:36:31.632262 ntpd[1849]: Listen and drop on 0 v6wildcard [::]:123
Jun 25 18:36:31.644336 jq[1881]: true
Jun 25 18:36:31.642742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:36:31.632332 ntpd[1849]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jun 25 18:36:31.632541 ntpd[1849]: Listen normally on 2 lo 127.0.0.1:123
Jun 25 18:36:31.632661 ntpd[1849]: Listen normally on 3 eth0 172.31.20.217:123
Jun 25 18:36:31.632712 ntpd[1849]: Listen normally on 4 lo [::1]:123
Jun 25 18:36:31.632758 ntpd[1849]: Listen normally on 5 eth0 [fe80::4f3:21ff:fe06:2d05%2]:123
Jun 25 18:36:31.632797 ntpd[1849]: Listening on routing socket on fd #22 for interface updates
Jun 25 18:36:31.649115 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jun 25 18:36:31.657667 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:36:31.696371 update_engine[1859]: I0625 18:36:31.675174 1859 main.cc:92] Flatcar Update Engine starting
Jun 25 18:36:31.696371 update_engine[1859]: I0625 18:36:31.691737 1859 update_check_scheduler.cc:74] Next update check in 2m21s
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.669 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.683 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.686 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.687 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.687 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.688 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.688 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.689 INFO Fetch failed with 404: resource not found
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.689 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.689 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.689 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.690 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.690 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.694 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.694 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.695 INFO Fetch successful
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.695 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jun 25 18:36:31.701678 coreos-metadata[1844]: Jun 25 18:36:31.697 INFO Fetch successful
Jun 25 18:36:31.723680 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 25 18:36:31.723680 ntpd[1849]: 25 Jun 18:36:31 ntpd[1849]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 25 18:36:31.673185 ntpd[1849]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 25 18:36:31.667676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:36:31.730253 extend-filesystems[1880]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jun 25 18:36:31.730253 extend-filesystems[1880]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 25 18:36:31.730253 extend-filesystems[1880]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jun 25 18:36:31.673219 ntpd[1849]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 25 18:36:31.677405 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:36:31.752698 extend-filesystems[1847]: Resized filesystem in /dev/nvme0n1p9
Jun 25 18:36:31.691596 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:36:31.705169 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:36:31.707183 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 18:36:31.707415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 18:36:31.709494 systemd[1]: Finished setup-oem.service - Setup OEM.
Jun 25 18:36:31.730655 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jun 25 18:36:31.900945 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 25 18:36:31.902768 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:36:31.923771 amazon-ssm-agent[1914]: Initializing new seelog logger
Jun 25 18:36:31.927426 amazon-ssm-agent[1914]: New Seelog Logger Creation Complete
Jun 25 18:36:31.927426 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.927426 amazon-ssm-agent[1914]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.927426 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 processing appconfig overrides
Jun 25 18:36:31.927643 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.927643 amazon-ssm-agent[1914]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.928593 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 processing appconfig overrides
Jun 25 18:36:31.928989 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.928989 amazon-ssm-agent[1914]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.929133 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 processing appconfig overrides
Jun 25 18:36:31.929697 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO Proxy environment variables:
Jun 25 18:36:31.988207 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1719)
Jun 25 18:36:31.994004 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.994004 amazon-ssm-agent[1914]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 25 18:36:31.994168 amazon-ssm-agent[1914]: 2024/06/25 18:36:31 processing appconfig overrides
Jun 25 18:36:32.040933 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO https_proxy:
Jun 25 18:36:32.071493 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:36:32.086615 bash[1960]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:36:32.091238 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 18:36:32.111343 systemd[1]: Starting sshkeys.service...
Jun 25 18:36:32.146966 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO http_proxy:
Jun 25 18:36:32.199188 systemd-logind[1858]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 18:36:32.199866 systemd-logind[1858]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jun 25 18:36:32.199986 systemd-logind[1858]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 18:36:32.201563 systemd-logind[1858]: New seat seat0.
Jun 25 18:36:32.202629 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 18:36:32.236273 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 25 18:36:32.250707 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO no_proxy:
Jun 25 18:36:32.250585 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 25 18:36:32.357612 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO Checking if agent identity type OnPrem can be assumed
Jun 25 18:36:32.460704 amazon-ssm-agent[1914]: 2024-06-25 18:36:31 INFO Checking if agent identity type EC2 can be assumed
Jun 25 18:36:32.491349 coreos-metadata[2018]: Jun 25 18:36:32.489 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jun 25 18:36:32.492796 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jun 25 18:36:32.493013 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jun 25 18:36:32.493345 locksmithd[1910]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 18:36:32.553475 coreos-metadata[2018]: Jun 25 18:36:32.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jun 25 18:36:32.553475 coreos-metadata[2018]: Jun 25 18:36:32.537 INFO Fetch successful
Jun 25 18:36:32.553475 coreos-metadata[2018]: Jun 25 18:36:32.537 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 25 18:36:32.540942 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1896 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jun 25 18:36:32.555242 coreos-metadata[2018]: Jun 25 18:36:32.554 INFO Fetch successful
Jun 25 18:36:32.561544 systemd[1]: Starting polkit.service - Authorization Manager...
Jun 25 18:36:32.565882 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO Agent will take identity from EC2
Jun 25 18:36:32.566758 unknown[2018]: wrote ssh authorized keys file for user: core
Jun 25 18:36:32.606923 polkitd[2048]: Started polkitd version 121
Jun 25 18:36:32.691422 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 25 18:36:32.705301 update-ssh-keys[2053]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 18:36:32.706452 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 25 18:36:32.723100 systemd[1]: Finished sshkeys.service.
Jun 25 18:36:32.742309 polkitd[2048]: Loading rules from directory /etc/polkit-1/rules.d
Jun 25 18:36:32.742561 polkitd[2048]: Loading rules from directory /usr/share/polkit-1/rules.d
Jun 25 18:36:32.751348 polkitd[2048]: Finished loading, compiling and executing 2 rules
Jun 25 18:36:32.760369 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jun 25 18:36:32.762159 systemd[1]: Started polkit.service - Authorization Manager.
Jun 25 18:36:32.770740 polkitd[2048]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jun 25 18:36:32.808281 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 25 18:36:32.860582 systemd-hostnamed[1896]: Hostname set to (transient)
Jun 25 18:36:32.861800 systemd-resolved[1674]: System hostname changed to 'ip-172-31-20-217'.
Jun 25 18:36:32.915092 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 25 18:36:33.020375 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jun 25 18:36:33.120586 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jun 25 18:36:33.122650 containerd[1879]: time="2024-06-25T18:36:33.122538994Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Jun 25 18:36:33.199742 sshd_keygen[1874]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:36:33.228194 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] Starting Core Agent
Jun 25 18:36:33.280149 containerd[1879]: time="2024-06-25T18:36:33.278726874Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 18:36:33.280149 containerd[1879]: time="2024-06-25T18:36:33.279230478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.283540584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.283605736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284290352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284322147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284535382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284621534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284640451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.284738497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.285454799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.285486129Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 18:36:33.286674 containerd[1879]: time="2024-06-25T18:36:33.285502022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 18:36:33.287783 containerd[1879]: time="2024-06-25T18:36:33.285853804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 18:36:33.287783 containerd[1879]: time="2024-06-25T18:36:33.286051495Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 18:36:33.287783 containerd[1879]: time="2024-06-25T18:36:33.286222564Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 18:36:33.287783 containerd[1879]: time="2024-06-25T18:36:33.286245077Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 18:36:33.294847 containerd[1879]: time="2024-06-25T18:36:33.294796171Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 18:36:33.294847 containerd[1879]: time="2024-06-25T18:36:33.294853811Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 18:36:33.295154 containerd[1879]: time="2024-06-25T18:36:33.294875114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 18:36:33.295154 containerd[1879]: time="2024-06-25T18:36:33.294952045Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 18:36:33.295255 containerd[1879]: time="2024-06-25T18:36:33.295159586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 18:36:33.295255 containerd[1879]: time="2024-06-25T18:36:33.295184036Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 18:36:33.295255 containerd[1879]: time="2024-06-25T18:36:33.295205277Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295378082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295406350Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295426562Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295619043Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295645676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295681 containerd[1879]: time="2024-06-25T18:36:33.295670580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295689106Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295714754Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295735221Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295760719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295782123Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.295917 containerd[1879]: time="2024-06-25T18:36:33.295803943Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 18:36:33.296129 containerd[1879]: time="2024-06-25T18:36:33.295949204Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.300507087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.300582400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.300611003Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.301865060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.301987317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.302025831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.302101 containerd[1879]: time="2024-06-25T18:36:33.302052627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305330 containerd[1879]: time="2024-06-25T18:36:33.305277223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305330 containerd[1879]: time="2024-06-25T18:36:33.305334612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305496 containerd[1879]: time="2024-06-25T18:36:33.305360377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305496 containerd[1879]: time="2024-06-25T18:36:33.305383261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305496 containerd[1879]: time="2024-06-25T18:36:33.305404777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.305496 containerd[1879]: time="2024-06-25T18:36:33.305431167Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305738877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305783516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305814482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305843981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305869993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305899349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305924887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308296 containerd[1879]: time="2024-06-25T18:36:33.305947484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 18:36:33.308693 containerd[1879]: time="2024-06-25T18:36:33.307106021Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:36:33.308693 containerd[1879]: time="2024-06-25T18:36:33.307777857Z" level=info msg="Connect containerd service"
Jun 25 18:36:33.308693 containerd[1879]: time="2024-06-25T18:36:33.307833887Z" level=info msg="using legacy CRI server"
Jun 25 18:36:33.308693 containerd[1879]: time="2024-06-25T18:36:33.307844258Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:36:33.310224 containerd[1879]: time="2024-06-25T18:36:33.309273065Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312540781Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312616863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312675741Z" level=info msg="Start subscribing containerd event"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312729302Z" level=info msg="Start recovering state"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312808449Z" level=info msg="Start event monitor"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312823350Z" level=info msg="Start snapshots syncer"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312835682Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.312845815Z" level=info msg="Start streaming server"
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.313134805Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.313196873Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:36:33.313765 containerd[1879]: time="2024-06-25T18:36:33.313222265Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:36:33.314185 containerd[1879]: time="2024-06-25T18:36:33.313843824Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:36:33.314185 containerd[1879]: time="2024-06-25T18:36:33.313902311Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 18:36:33.314101 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 18:36:33.315834 containerd[1879]: time="2024-06-25T18:36:33.314232444Z" level=info msg="containerd successfully booted in 0.202930s"
Jun 25 18:36:33.332090 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jun 25 18:36:33.347379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 25 18:36:33.359180 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:36:33.401167 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:36:33.401412 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:36:33.410433 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:36:33.420725 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [Registrar] Starting registrar module
Jun 25 18:36:33.422266 amazon-ssm-agent[1914]: 2024-06-25 18:36:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jun 25 18:36:33.422266 amazon-ssm-agent[1914]: 2024-06-25 18:36:33 INFO [EC2Identity] EC2 registration was successful.
Jun 25 18:36:33.422266 amazon-ssm-agent[1914]: 2024-06-25 18:36:33 INFO [CredentialRefresher] credentialRefresher has started
Jun 25 18:36:33.422266 amazon-ssm-agent[1914]: 2024-06-25 18:36:33 INFO [CredentialRefresher] Starting credentials refresher loop
Jun 25 18:36:33.422266 amazon-ssm-agent[1914]: 2024-06-25 18:36:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jun 25 18:36:33.433106 amazon-ssm-agent[1914]: 2024-06-25 18:36:33 INFO [CredentialRefresher] Next credential rotation will be in 32.23330557765 minutes
Jun 25 18:36:33.456166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:36:33.472595 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:36:33.484496 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 18:36:33.486319 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:36:33.868410 tar[1866]: linux-amd64/LICENSE
Jun 25 18:36:33.868891 tar[1866]: linux-amd64/README.md
Jun 25 18:36:33.891057 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 18:36:34.350112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:36:34.352280 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:36:34.354275 systemd[1]: Startup finished in 951ms (kernel) + 9.300s (initrd) + 8.304s (userspace) = 18.556s.
Jun 25 18:36:34.445596 amazon-ssm-agent[1914]: 2024-06-25 18:36:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jun 25 18:36:34.520631 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:36:34.551198 amazon-ssm-agent[1914]: 2024-06-25 18:36:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2097) started
Jun 25 18:36:34.647770 amazon-ssm-agent[1914]: 2024-06-25 18:36:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jun 25 18:36:35.042266 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 18:36:35.050659 systemd[1]: Started sshd@0-172.31.20.217:22-139.178.68.195:53232.service - OpenSSH per-connection server daemon (139.178.68.195:53232).
Jun 25 18:36:35.242518 sshd[2117]: Accepted publickey for core from 139.178.68.195 port 53232 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:35.244405 sshd[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:35.267953 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:36:35.278188 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:36:35.284041 systemd-logind[1858]: New session 1 of user core.
Jun 25 18:36:35.310663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 25 18:36:35.321467 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:36:35.333724 (systemd)[2122]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:35.561282 kubelet[2094]: E0625 18:36:35.559138 2094 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:36:35.563377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:36:35.563583 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:36:35.564186 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
Jun 25 18:36:35.593714 systemd[2122]: Queued start job for default target default.target.
Jun 25 18:36:35.608107 systemd[2122]: Created slice app.slice - User Application Slice.
Jun 25 18:36:35.608154 systemd[2122]: Reached target paths.target - Paths.
Jun 25 18:36:35.608175 systemd[2122]: Reached target timers.target - Timers.
Jun 25 18:36:35.610056 systemd[2122]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:36:35.634574 systemd[2122]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:36:35.634735 systemd[2122]: Reached target sockets.target - Sockets.
Jun 25 18:36:35.634755 systemd[2122]: Reached target basic.target - Basic System.
Jun 25 18:36:35.634809 systemd[2122]: Reached target default.target - Main User Target.
Jun 25 18:36:35.634847 systemd[2122]: Startup finished in 282ms.
Jun 25 18:36:35.635106 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:36:35.642325 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:36:35.800228 systemd[1]: Started sshd@1-172.31.20.217:22-139.178.68.195:53234.service - OpenSSH per-connection server daemon (139.178.68.195:53234).
Jun 25 18:36:35.967795 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 53234 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:35.969446 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:35.976945 systemd-logind[1858]: New session 2 of user core.
Jun 25 18:36:35.991471 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 18:36:36.112126 sshd[2134]: pam_unix(sshd:session): session closed for user core
Jun 25 18:36:36.115574 systemd[1]: sshd@1-172.31.20.217:22-139.178.68.195:53234.service: Deactivated successfully.
Jun 25 18:36:36.117962 systemd[1]: session-2.scope: Deactivated successfully.
Jun 25 18:36:36.119727 systemd-logind[1858]: Session 2 logged out. Waiting for processes to exit.
Jun 25 18:36:36.121018 systemd-logind[1858]: Removed session 2.
Jun 25 18:36:36.145646 systemd[1]: Started sshd@2-172.31.20.217:22-139.178.68.195:53246.service - OpenSSH per-connection server daemon (139.178.68.195:53246).
Jun 25 18:36:36.333572 sshd[2141]: Accepted publickey for core from 139.178.68.195 port 53246 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:36.335984 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:36.341739 systemd-logind[1858]: New session 3 of user core.
Jun 25 18:36:36.349305 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 18:36:36.463516 sshd[2141]: pam_unix(sshd:session): session closed for user core
Jun 25 18:36:36.472554 systemd[1]: sshd@2-172.31.20.217:22-139.178.68.195:53246.service: Deactivated successfully.
Jun 25 18:36:36.476848 systemd[1]: session-3.scope: Deactivated successfully.
Jun 25 18:36:36.479710 systemd-logind[1858]: Session 3 logged out. Waiting for processes to exit.
Jun 25 18:36:36.484860 systemd-logind[1858]: Removed session 3.
Jun 25 18:36:36.517610 systemd[1]: Started sshd@3-172.31.20.217:22-139.178.68.195:53262.service - OpenSSH per-connection server daemon (139.178.68.195:53262).
Jun 25 18:36:36.690160 sshd[2148]: Accepted publickey for core from 139.178.68.195 port 53262 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:36.691888 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:36.707900 systemd-logind[1858]: New session 4 of user core.
Jun 25 18:36:36.718507 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 18:36:36.844180 sshd[2148]: pam_unix(sshd:session): session closed for user core
Jun 25 18:36:36.848000 systemd[1]: sshd@3-172.31.20.217:22-139.178.68.195:53262.service: Deactivated successfully.
Jun 25 18:36:36.850920 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 18:36:36.853315 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit.
Jun 25 18:36:36.854557 systemd-logind[1858]: Removed session 4.
Jun 25 18:36:36.872696 systemd[1]: Started sshd@4-172.31.20.217:22-139.178.68.195:53274.service - OpenSSH per-connection server daemon (139.178.68.195:53274).
Jun 25 18:36:37.044103 sshd[2155]: Accepted publickey for core from 139.178.68.195 port 53274 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:37.046345 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:37.071803 systemd-logind[1858]: New session 5 of user core.
Jun 25 18:36:37.079310 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 18:36:37.256774 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 18:36:37.257174 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:36:37.283978 sudo[2158]: pam_unix(sudo:session): session closed for user root
Jun 25 18:36:37.309237 sshd[2155]: pam_unix(sshd:session): session closed for user core
Jun 25 18:36:37.314801 systemd[1]: sshd@4-172.31.20.217:22-139.178.68.195:53274.service: Deactivated successfully.
Jun 25 18:36:37.316901 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 18:36:37.317828 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit.
Jun 25 18:36:37.319257 systemd-logind[1858]: Removed session 5.
Jun 25 18:36:37.343768 systemd[1]: Started sshd@5-172.31.20.217:22-139.178.68.195:37180.service - OpenSSH per-connection server daemon (139.178.68.195:37180).
Jun 25 18:36:37.525423 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 37180 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:37.529761 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:37.550620 systemd-logind[1858]: New session 6 of user core.
Jun 25 18:36:37.557440 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 18:36:37.675822 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 25 18:36:37.676322 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:36:37.703968 sudo[2167]: pam_unix(sudo:session): session closed for user root
Jun 25 18:36:37.715849 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 25 18:36:37.716440 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:36:37.738518 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 25 18:36:37.749200 auditctl[2170]: No rules
Jun 25 18:36:37.750280 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 25 18:36:37.750778 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 25 18:36:37.759592 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:36:37.805703 augenrules[2188]: No rules
Jun 25 18:36:37.807172 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:36:37.809275 sudo[2166]: pam_unix(sudo:session): session closed for user root
Jun 25 18:36:37.833575 sshd[2163]: pam_unix(sshd:session): session closed for user core
Jun 25 18:36:37.842423 systemd[1]: sshd@5-172.31.20.217:22-139.178.68.195:37180.service: Deactivated successfully.
Jun 25 18:36:37.845151 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 18:36:37.846844 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit.
Jun 25 18:36:37.848292 systemd-logind[1858]: Removed session 6.
Jun 25 18:36:37.873356 systemd[1]: Started sshd@6-172.31.20.217:22-139.178.68.195:37194.service - OpenSSH per-connection server daemon (139.178.68.195:37194).
Jun 25 18:36:38.055695 sshd[2196]: Accepted publickey for core from 139.178.68.195 port 37194 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:36:38.057818 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:36:38.090868 systemd-logind[1858]: New session 7 of user core.
Jun 25 18:36:38.100488 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 18:36:38.222474 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 25 18:36:38.222873 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:36:38.518653 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 25 18:36:38.518981 (dockerd)[2209]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 25 18:36:39.351865 systemd-resolved[1674]: Clock change detected. Flushing caches.
Jun 25 18:36:40.032707 dockerd[2209]: time="2024-06-25T18:36:40.032645588Z" level=info msg="Starting up"
Jun 25 18:36:40.073854 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3080125453-merged.mount: Deactivated successfully.
Jun 25 18:36:40.299756 dockerd[2209]: time="2024-06-25T18:36:40.299619496Z" level=info msg="Loading containers: start."
Jun 25 18:36:40.540203 kernel: Initializing XFRM netlink socket
Jun 25 18:36:40.588071 (udev-worker)[2267]: Network interface NamePolicy= disabled on kernel command line.
Jun 25 18:36:40.761524 systemd-networkd[1718]: docker0: Link UP
Jun 25 18:36:40.777865 dockerd[2209]: time="2024-06-25T18:36:40.777817555Z" level=info msg="Loading containers: done."
Jun 25 18:36:40.942347 dockerd[2209]: time="2024-06-25T18:36:40.942221161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 25 18:36:40.942534 dockerd[2209]: time="2024-06-25T18:36:40.942484474Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 25 18:36:40.942706 dockerd[2209]: time="2024-06-25T18:36:40.942620641Z" level=info msg="Daemon has completed initialization"
Jun 25 18:36:41.005257 dockerd[2209]: time="2024-06-25T18:36:41.004518261Z" level=info msg="API listen on /run/docker.sock"
Jun 25 18:36:41.004823 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 25 18:36:42.326219 containerd[1879]: time="2024-06-25T18:36:42.325986999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jun 25 18:36:43.040986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572719948.mount: Deactivated successfully.
Jun 25 18:36:46.184621 containerd[1879]: time="2024-06-25T18:36:46.184560849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:36:46.186194 containerd[1879]: time="2024-06-25T18:36:46.186124647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801"
Jun 25 18:36:46.187926 containerd[1879]: time="2024-06-25T18:36:46.187780924Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:36:46.196059 containerd[1879]: time="2024-06-25T18:36:46.195440081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:36:46.201872 containerd[1879]: time="2024-06-25T18:36:46.199989817Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.873894536s"
Jun 25 18:36:46.201872 containerd[1879]: time="2024-06-25T18:36:46.200182364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jun 25 18:36:46.231148 containerd[1879]: time="2024-06-25T18:36:46.231109849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jun 25 18:36:46.386484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:36:46.398152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:47.298126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:47.308341 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:47.410946 kubelet[2410]: E0625 18:36:47.410883 2410 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:47.415429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:47.415902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:36:49.909707 containerd[1879]: time="2024-06-25T18:36:49.909648608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:49.911831 containerd[1879]: time="2024-06-25T18:36:49.911613102Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jun 25 18:36:49.914187 containerd[1879]: time="2024-06-25T18:36:49.913562287Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:49.917636 containerd[1879]: time="2024-06-25T18:36:49.917590407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:49.919257 containerd[1879]: time="2024-06-25T18:36:49.919086030Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 3.687930281s" Jun 25 18:36:49.919429 containerd[1879]: time="2024-06-25T18:36:49.919401324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 18:36:49.958620 containerd[1879]: time="2024-06-25T18:36:49.958578557Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 18:36:51.853933 containerd[1879]: time="2024-06-25T18:36:51.853882465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.855367 containerd[1879]: time="2024-06-25T18:36:51.855289978Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jun 25 18:36:51.858828 containerd[1879]: time="2024-06-25T18:36:51.856870428Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.865416 containerd[1879]: time="2024-06-25T18:36:51.865365707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:51.867231 containerd[1879]: time="2024-06-25T18:36:51.867186021Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id 
\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.908527261s" Jun 25 18:36:51.867351 containerd[1879]: time="2024-06-25T18:36:51.867236080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 18:36:51.892665 containerd[1879]: time="2024-06-25T18:36:51.892626541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 18:36:53.403957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445566.mount: Deactivated successfully. Jun 25 18:36:54.036588 containerd[1879]: time="2024-06-25T18:36:54.036526578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.039286 containerd[1879]: time="2024-06-25T18:36:54.039065056Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jun 25 18:36:54.040828 containerd[1879]: time="2024-06-25T18:36:54.040572263Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.045252 containerd[1879]: time="2024-06-25T18:36:54.045093165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:54.046603 containerd[1879]: time="2024-06-25T18:36:54.046015546Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.153343868s" Jun 25 18:36:54.046603 containerd[1879]: time="2024-06-25T18:36:54.046065884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 18:36:54.113277 containerd[1879]: time="2024-06-25T18:36:54.112338697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 18:36:54.777505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572232371.mount: Deactivated successfully. Jun 25 18:36:56.425879 containerd[1879]: time="2024-06-25T18:36:56.425824109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:56.427439 containerd[1879]: time="2024-06-25T18:36:56.427374305Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 18:36:56.429252 containerd[1879]: time="2024-06-25T18:36:56.429192015Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:56.433477 containerd[1879]: time="2024-06-25T18:36:56.433094134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:56.434377 containerd[1879]: time="2024-06-25T18:36:56.434331167Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.320986991s" Jun 25 18:36:56.434484 containerd[1879]: time="2024-06-25T18:36:56.434382414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 18:36:56.469053 containerd[1879]: time="2024-06-25T18:36:56.469009072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:36:57.015596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464136654.mount: Deactivated successfully. Jun 25 18:36:57.025556 containerd[1879]: time="2024-06-25T18:36:57.025500701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:57.026827 containerd[1879]: time="2024-06-25T18:36:57.026731810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:36:57.028644 containerd[1879]: time="2024-06-25T18:36:57.028584144Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:57.036783 containerd[1879]: time="2024-06-25T18:36:57.036694814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:36:57.038481 containerd[1879]: time="2024-06-25T18:36:57.037767361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 568.706816ms" Jun 25 
18:36:57.038481 containerd[1879]: time="2024-06-25T18:36:57.037833096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:36:57.084144 containerd[1879]: time="2024-06-25T18:36:57.084090672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 18:36:57.636308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:36:57.663153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:36:57.781324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556270676.mount: Deactivated successfully. Jun 25 18:36:58.038705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:36:58.051529 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:36:58.222817 kubelet[2521]: E0625 18:36:58.220070 2521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:36:58.225473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:36:58.232172 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:37:03.330028 containerd[1879]: time="2024-06-25T18:37:03.329953259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:03.334151 containerd[1879]: time="2024-06-25T18:37:03.332412437Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jun 25 18:37:03.338192 containerd[1879]: time="2024-06-25T18:37:03.338123735Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:03.353332 containerd[1879]: time="2024-06-25T18:37:03.353260246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:03.354881 containerd[1879]: time="2024-06-25T18:37:03.354833962Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 6.270693033s" Jun 25 18:37:03.355182 containerd[1879]: time="2024-06-25T18:37:03.355047778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 18:37:03.685604 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 18:37:07.816652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:07.829364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:07.870932 systemd[1]: Reloading requested from client PID 2636 ('systemctl') (unit session-7.scope)... 
Jun 25 18:37:07.871083 systemd[1]: Reloading... Jun 25 18:37:08.080892 zram_generator::config[2677]: No configuration found. Jun 25 18:37:08.319001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:37:08.463416 systemd[1]: Reloading finished in 591 ms. Jun 25 18:37:08.554538 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:37:08.554645 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:37:08.555099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:08.575037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:09.006056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:09.009823 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:37:09.125775 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:09.125775 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:37:09.125775 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:37:09.126739 kubelet[2735]: I0625 18:37:09.125856 2735 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:37:09.834591 kubelet[2735]: I0625 18:37:09.834380 2735 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:37:09.834591 kubelet[2735]: I0625 18:37:09.834595 2735 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:37:09.834928 kubelet[2735]: I0625 18:37:09.834904 2735 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:37:09.874448 kubelet[2735]: I0625 18:37:09.874124 2735 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:37:09.881175 kubelet[2735]: E0625 18:37:09.881136 2735 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.912890 kubelet[2735]: I0625 18:37:09.912715 2735 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:37:09.920717 kubelet[2735]: I0625 18:37:09.920544 2735 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:37:09.922053 kubelet[2735]: I0625 18:37:09.920706 2735 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-217","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:37:09.922208 kubelet[2735]: I0625 18:37:09.922074 2735 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
18:37:09.922208 kubelet[2735]: I0625 18:37:09.922093 2735 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:37:09.922311 kubelet[2735]: I0625 18:37:09.922252 2735 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:09.923826 kubelet[2735]: I0625 18:37:09.923793 2735 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:37:09.923932 kubelet[2735]: I0625 18:37:09.923832 2735 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:37:09.923932 kubelet[2735]: I0625 18:37:09.923867 2735 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:37:09.923932 kubelet[2735]: I0625 18:37:09.923885 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:37:09.932964 kubelet[2735]: W0625 18:37:09.932893 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-217&limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.933177 kubelet[2735]: E0625 18:37:09.933162 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-217&limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.935147 kubelet[2735]: W0625 18:37:09.935080 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.935147 kubelet[2735]: E0625 18:37:09.935147 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.935814 kubelet[2735]: I0625 18:37:09.935778 2735 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:37:09.938164 kubelet[2735]: I0625 18:37:09.938132 2735 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:37:09.938258 kubelet[2735]: W0625 18:37:09.938214 2735 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:37:09.942830 kubelet[2735]: I0625 18:37:09.939019 2735 server.go:1264] "Started kubelet" Jun 25 18:37:09.949258 kubelet[2735]: I0625 18:37:09.949203 2735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:37:09.953882 kubelet[2735]: I0625 18:37:09.952504 2735 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:37:09.954523 kubelet[2735]: I0625 18:37:09.954458 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:37:09.955173 kubelet[2735]: I0625 18:37:09.955148 2735 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:37:09.957554 kubelet[2735]: I0625 18:37:09.957533 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:37:09.968995 kubelet[2735]: E0625 18:37:09.955691 2735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.217:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-217.17dc53318768e339 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-217,UID:ip-172-31-20-217,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-217,},FirstTimestamp:2024-06-25 18:37:09.938987833 +0000 UTC m=+0.920049297,LastTimestamp:2024-06-25 18:37:09.938987833 +0000 UTC m=+0.920049297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-217,}" Jun 25 18:37:09.973719 kubelet[2735]: E0625 18:37:09.969062 2735 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-20-217\" not found" Jun 25 18:37:09.973903 kubelet[2735]: I0625 18:37:09.973764 2735 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:37:09.973903 kubelet[2735]: I0625 18:37:09.973902 2735 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:37:09.973992 kubelet[2735]: I0625 18:37:09.973983 2735 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:37:09.974668 kubelet[2735]: E0625 18:37:09.974627 2735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": dial tcp 172.31.20.217:6443: connect: connection refused" interval="200ms" Jun 25 18:37:09.979023 kubelet[2735]: W0625 18:37:09.978946 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.979136 kubelet[2735]: E0625 18:37:09.979033 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:09.979477 kubelet[2735]: I0625 18:37:09.979453 2735 factory.go:221] 
Registration of the systemd container factory successfully Jun 25 18:37:09.979587 kubelet[2735]: I0625 18:37:09.979565 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:37:09.982430 kubelet[2735]: I0625 18:37:09.982400 2735 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:37:09.996950 kubelet[2735]: E0625 18:37:09.995690 2735 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:37:10.012056 kubelet[2735]: I0625 18:37:10.011942 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:37:10.014324 kubelet[2735]: I0625 18:37:10.014287 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:37:10.014324 kubelet[2735]: I0625 18:37:10.014328 2735 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:37:10.014583 kubelet[2735]: I0625 18:37:10.014347 2735 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:37:10.014583 kubelet[2735]: E0625 18:37:10.014392 2735 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:37:10.023594 kubelet[2735]: W0625 18:37:10.023336 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:10.024067 kubelet[2735]: E0625 18:37:10.024038 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.20.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:10.030577 kubelet[2735]: I0625 18:37:10.030543 2735 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:37:10.030577 kubelet[2735]: I0625 18:37:10.030567 2735 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:37:10.030577 kubelet[2735]: I0625 18:37:10.030587 2735 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:10.036822 kubelet[2735]: I0625 18:37:10.036769 2735 policy_none.go:49] "None policy: Start" Jun 25 18:37:10.037994 kubelet[2735]: I0625 18:37:10.037970 2735 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:37:10.038109 kubelet[2735]: I0625 18:37:10.038006 2735 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:37:10.051093 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:37:10.071358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:37:10.077649 kubelet[2735]: I0625 18:37:10.077331 2735 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217" Jun 25 18:37:10.079003 kubelet[2735]: E0625 18:37:10.078418 2735 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.217:6443/api/v1/nodes\": dial tcp 172.31.20.217:6443: connect: connection refused" node="ip-172-31-20-217" Jun 25 18:37:10.079670 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:37:10.098456 kubelet[2735]: I0625 18:37:10.098003 2735 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:37:10.098456 kubelet[2735]: I0625 18:37:10.098465 2735 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:37:10.098456 kubelet[2735]: I0625 18:37:10.098601 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:37:10.105293 kubelet[2735]: E0625 18:37:10.105202 2735 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-217\" not found" Jun 25 18:37:10.115015 kubelet[2735]: I0625 18:37:10.114956 2735 topology_manager.go:215] "Topology Admit Handler" podUID="05377a85f48908c68054e5e873cba8c6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-217" Jun 25 18:37:10.116948 kubelet[2735]: I0625 18:37:10.116903 2735 topology_manager.go:215] "Topology Admit Handler" podUID="653716146b61d24ddda129dbc59b717c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.119372 kubelet[2735]: I0625 18:37:10.119344 2735 topology_manager.go:215] "Topology Admit Handler" podUID="47019425b534b0fbaf3ce11d62eda7d2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-217" Jun 25 18:37:10.131457 systemd[1]: Created slice kubepods-burstable-pod05377a85f48908c68054e5e873cba8c6.slice - libcontainer container kubepods-burstable-pod05377a85f48908c68054e5e873cba8c6.slice. Jun 25 18:37:10.160328 systemd[1]: Created slice kubepods-burstable-pod653716146b61d24ddda129dbc59b717c.slice - libcontainer container kubepods-burstable-pod653716146b61d24ddda129dbc59b717c.slice. Jun 25 18:37:10.167646 systemd[1]: Created slice kubepods-burstable-pod47019425b534b0fbaf3ce11d62eda7d2.slice - libcontainer container kubepods-burstable-pod47019425b534b0fbaf3ce11d62eda7d2.slice. 
Jun 25 18:37:10.175287 kubelet[2735]: E0625 18:37:10.175235 2735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": dial tcp 172.31.20.217:6443: connect: connection refused" interval="400ms" Jun 25 18:37:10.275818 kubelet[2735]: I0625 18:37:10.275753 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-ca-certs\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217" Jun 25 18:37:10.275818 kubelet[2735]: I0625 18:37:10.275822 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.276043 kubelet[2735]: I0625 18:37:10.275850 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.276043 kubelet[2735]: I0625 18:37:10.275870 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47019425b534b0fbaf3ce11d62eda7d2-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-217\" (UID: \"47019425b534b0fbaf3ce11d62eda7d2\") " pod="kube-system/kube-scheduler-ip-172-31-20-217" Jun 25 18:37:10.276043 kubelet[2735]: I0625 
18:37:10.275894 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.276043 kubelet[2735]: I0625 18:37:10.275918 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217" Jun 25 18:37:10.276043 kubelet[2735]: I0625 18:37:10.275965 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217" Jun 25 18:37:10.276613 kubelet[2735]: I0625 18:37:10.275992 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.276613 kubelet[2735]: I0625 18:37:10.276019 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-217" Jun 25 18:37:10.280989 kubelet[2735]: I0625 18:37:10.280965 2735 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217" Jun 25 18:37:10.281332 kubelet[2735]: E0625 18:37:10.281306 2735 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.217:6443/api/v1/nodes\": dial tcp 172.31.20.217:6443: connect: connection refused" node="ip-172-31-20-217" Jun 25 18:37:10.455749 containerd[1879]: time="2024-06-25T18:37:10.455679515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-217,Uid:05377a85f48908c68054e5e873cba8c6,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.482005 containerd[1879]: time="2024-06-25T18:37:10.481943133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-217,Uid:653716146b61d24ddda129dbc59b717c,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.483334 containerd[1879]: time="2024-06-25T18:37:10.482645794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-217,Uid:47019425b534b0fbaf3ce11d62eda7d2,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:10.576565 kubelet[2735]: E0625 18:37:10.576491 2735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": dial tcp 172.31.20.217:6443: connect: connection refused" interval="800ms" Jun 25 18:37:10.683569 kubelet[2735]: I0625 18:37:10.683540 2735 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217" Jun 25 18:37:10.684157 kubelet[2735]: E0625 18:37:10.684121 2735 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.217:6443/api/v1/nodes\": dial tcp 172.31.20.217:6443: connect: connection refused" node="ip-172-31-20-217" Jun 25 18:37:10.949176 
kubelet[2735]: W0625 18:37:10.949131 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:10.949176 kubelet[2735]: E0625 18:37:10.949183 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.044565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747309179.mount: Deactivated successfully. Jun 25 18:37:11.070756 containerd[1879]: time="2024-06-25T18:37:11.070368010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.084817 containerd[1879]: time="2024-06-25T18:37:11.082174789Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.090599 containerd[1879]: time="2024-06-25T18:37:11.088821566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:37:11.100407 containerd[1879]: time="2024-06-25T18:37:11.100351229Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.105318 kubelet[2735]: W0625 18:37:11.105116 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.105318 kubelet[2735]: E0625 18:37:11.105285 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.114135 containerd[1879]: time="2024-06-25T18:37:11.109166999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:37:11.125577 containerd[1879]: time="2024-06-25T18:37:11.124708740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:37:11.125577 containerd[1879]: time="2024-06-25T18:37:11.124978446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.135287 containerd[1879]: time="2024-06-25T18:37:11.134870723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.002242ms" Jun 25 18:37:11.143242 containerd[1879]: time="2024-06-25T18:37:11.142752606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.404595ms" Jun 25 18:37:11.144322 containerd[1879]: time="2024-06-25T18:37:11.144056563Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:37:11.151922 containerd[1879]: time="2024-06-25T18:37:11.151822931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.131791ms" Jun 25 18:37:11.155392 kubelet[2735]: W0625 18:37:11.155315 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.155392 kubelet[2735]: E0625 18:37:11.155391 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.194605 kubelet[2735]: W0625 18:37:11.194521 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-217&limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.194605 kubelet[2735]: E0625 18:37:11.194577 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-217&limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:11.378064 
kubelet[2735]: E0625 18:37:11.377931 2735 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": dial tcp 172.31.20.217:6443: connect: connection refused" interval="1.6s" Jun 25 18:37:11.516731 kubelet[2735]: I0625 18:37:11.514716 2735 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217" Jun 25 18:37:11.518122 kubelet[2735]: E0625 18:37:11.518074 2735 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.217:6443/api/v1/nodes\": dial tcp 172.31.20.217:6443: connect: connection refused" node="ip-172-31-20-217" Jun 25 18:37:11.557320 containerd[1879]: time="2024-06-25T18:37:11.557002940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.557320 containerd[1879]: time="2024-06-25T18:37:11.557078943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.557320 containerd[1879]: time="2024-06-25T18:37:11.557108051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.557320 containerd[1879]: time="2024-06-25T18:37:11.557130636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.562010 containerd[1879]: time="2024-06-25T18:37:11.561868073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.562010 containerd[1879]: time="2024-06-25T18:37:11.561936009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.562268 containerd[1879]: time="2024-06-25T18:37:11.561965160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.562268 containerd[1879]: time="2024-06-25T18:37:11.561993870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.562428 containerd[1879]: time="2024-06-25T18:37:11.562359716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:11.562519 containerd[1879]: time="2024-06-25T18:37:11.562419054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.562519 containerd[1879]: time="2024-06-25T18:37:11.562461258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:11.562519 containerd[1879]: time="2024-06-25T18:37:11.562484873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:11.611438 systemd[1]: Started cri-containerd-81206c5471392a22c5331b790dce14b0d09cedf6083a582ff7f3daeb112515df.scope - libcontainer container 81206c5471392a22c5331b790dce14b0d09cedf6083a582ff7f3daeb112515df. Jun 25 18:37:11.623053 systemd[1]: Started cri-containerd-fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778.scope - libcontainer container fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778. Jun 25 18:37:11.639196 systemd[1]: Started cri-containerd-1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a.scope - libcontainer container 1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a. 
Jun 25 18:37:11.717128 containerd[1879]: time="2024-06-25T18:37:11.717086276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-217,Uid:05377a85f48908c68054e5e873cba8c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"81206c5471392a22c5331b790dce14b0d09cedf6083a582ff7f3daeb112515df\"" Jun 25 18:37:11.725175 containerd[1879]: time="2024-06-25T18:37:11.725128039Z" level=info msg="CreateContainer within sandbox \"81206c5471392a22c5331b790dce14b0d09cedf6083a582ff7f3daeb112515df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:37:11.768399 containerd[1879]: time="2024-06-25T18:37:11.768192953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-217,Uid:47019425b534b0fbaf3ce11d62eda7d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778\"" Jun 25 18:37:11.776650 containerd[1879]: time="2024-06-25T18:37:11.776431807Z" level=info msg="CreateContainer within sandbox \"81206c5471392a22c5331b790dce14b0d09cedf6083a582ff7f3daeb112515df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4df999d67c2e23b5e94ce052c10d6d709d6e96541bcc056b8a1a5eca0c820b9\"" Jun 25 18:37:11.779425 containerd[1879]: time="2024-06-25T18:37:11.779387882Z" level=info msg="CreateContainer within sandbox \"fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:37:11.781819 containerd[1879]: time="2024-06-25T18:37:11.780468407Z" level=info msg="StartContainer for \"d4df999d67c2e23b5e94ce052c10d6d709d6e96541bcc056b8a1a5eca0c820b9\"" Jun 25 18:37:11.796160 containerd[1879]: time="2024-06-25T18:37:11.796119880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-217,Uid:653716146b61d24ddda129dbc59b717c,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a\"" Jun 25 18:37:11.803080 containerd[1879]: time="2024-06-25T18:37:11.803039333Z" level=info msg="CreateContainer within sandbox \"1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:37:11.817299 containerd[1879]: time="2024-06-25T18:37:11.817253612Z" level=info msg="CreateContainer within sandbox \"fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810\"" Jun 25 18:37:11.820727 containerd[1879]: time="2024-06-25T18:37:11.820686450Z" level=info msg="StartContainer for \"0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810\"" Jun 25 18:37:11.830046 systemd[1]: Started cri-containerd-d4df999d67c2e23b5e94ce052c10d6d709d6e96541bcc056b8a1a5eca0c820b9.scope - libcontainer container d4df999d67c2e23b5e94ce052c10d6d709d6e96541bcc056b8a1a5eca0c820b9. Jun 25 18:37:11.841268 containerd[1879]: time="2024-06-25T18:37:11.841211533Z" level=info msg="CreateContainer within sandbox \"1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99\"" Jun 25 18:37:11.843343 containerd[1879]: time="2024-06-25T18:37:11.843294061Z" level=info msg="StartContainer for \"0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99\"" Jun 25 18:37:11.894033 systemd[1]: Started cri-containerd-0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810.scope - libcontainer container 0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810. 
Jun 25 18:37:11.924513 systemd[1]: Started cri-containerd-0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99.scope - libcontainer container 0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99. Jun 25 18:37:11.985164 containerd[1879]: time="2024-06-25T18:37:11.984825941Z" level=info msg="StartContainer for \"d4df999d67c2e23b5e94ce052c10d6d709d6e96541bcc056b8a1a5eca0c820b9\" returns successfully" Jun 25 18:37:12.005023 containerd[1879]: time="2024-06-25T18:37:12.004964397Z" level=info msg="StartContainer for \"0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810\" returns successfully" Jun 25 18:37:12.056480 kubelet[2735]: E0625 18:37:12.053866 2735 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:12.069108 containerd[1879]: time="2024-06-25T18:37:12.069062378Z" level=info msg="StartContainer for \"0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99\" returns successfully" Jun 25 18:37:12.235369 kubelet[2735]: E0625 18:37:12.235162 2735 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.217:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-217.17dc53318768e339 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-217,UID:ip-172-31-20-217,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-217,},FirstTimestamp:2024-06-25 18:37:09.938987833 +0000 UTC m=+0.920049297,LastTimestamp:2024-06-25 18:37:09.938987833 +0000 UTC 
m=+0.920049297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-217,}" Jun 25 18:37:12.736071 kubelet[2735]: W0625 18:37:12.735928 2735 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:12.736348 kubelet[2735]: E0625 18:37:12.736085 2735 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.217:6443: connect: connection refused Jun 25 18:37:13.121292 kubelet[2735]: I0625 18:37:13.120928 2735 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217" Jun 25 18:37:15.730667 kubelet[2735]: E0625 18:37:15.730601 2735 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-217\" not found" node="ip-172-31-20-217" Jun 25 18:37:15.757227 kubelet[2735]: I0625 18:37:15.757189 2735 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-217" Jun 25 18:37:15.932426 kubelet[2735]: I0625 18:37:15.932067 2735 apiserver.go:52] "Watching apiserver" Jun 25 18:37:15.975088 kubelet[2735]: I0625 18:37:15.975018 2735 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:37:17.588905 update_engine[1859]: I0625 18:37:17.588851 1859 update_attempter.cc:509] Updating boot flags... 
Jun 25 18:37:17.751171 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3025) Jun 25 18:37:18.010853 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3027) Jun 25 18:37:18.517705 systemd[1]: Reloading requested from client PID 3194 ('systemctl') (unit session-7.scope)... Jun 25 18:37:18.517729 systemd[1]: Reloading... Jun 25 18:37:18.747893 zram_generator::config[3238]: No configuration found. Jun 25 18:37:18.913524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:37:19.062428 systemd[1]: Reloading finished in 543 ms. Jun 25 18:37:19.152151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:19.173262 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:37:19.173524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:19.183190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:37:19.559401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:37:19.576327 (kubelet)[3289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:37:19.753092 kubelet[3289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:19.753092 kubelet[3289]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:37:19.753092 kubelet[3289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:37:19.753542 kubelet[3289]: I0625 18:37:19.753175 3289 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:37:19.764904 kubelet[3289]: I0625 18:37:19.764253 3289 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:37:19.764904 kubelet[3289]: I0625 18:37:19.764278 3289 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:37:19.764904 kubelet[3289]: I0625 18:37:19.764475 3289 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:37:19.768603 kubelet[3289]: I0625 18:37:19.767690 3289 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:37:19.783486 kubelet[3289]: I0625 18:37:19.783460 3289 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:37:19.811115 kubelet[3289]: I0625 18:37:19.810993 3289 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:37:19.812718 kubelet[3289]: I0625 18:37:19.811280 3289 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:37:19.812718 kubelet[3289]: I0625 18:37:19.811322 3289 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-217","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:37:19.817482 kubelet[3289]: I0625 18:37:19.815808 3289 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
18:37:19.817482 kubelet[3289]: I0625 18:37:19.815864 3289 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:37:19.817482 kubelet[3289]: I0625 18:37:19.815949 3289 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:37:19.817482 kubelet[3289]: I0625 18:37:19.816098 3289 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:37:19.824464 kubelet[3289]: I0625 18:37:19.823960 3289 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:37:19.827247 kubelet[3289]: I0625 18:37:19.826586 3289 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:37:19.827247 kubelet[3289]: I0625 18:37:19.826633 3289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:37:19.828494 kubelet[3289]: I0625 18:37:19.828459 3289 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:37:19.843006 kubelet[3289]: I0625 18:37:19.841417 3289 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:37:19.844235 kubelet[3289]: I0625 18:37:19.844161 3289 server.go:1264] "Started kubelet" Jun 25 18:37:19.874270 kubelet[3289]: I0625 18:37:19.867412 3289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:37:19.886006 kubelet[3289]: I0625 18:37:19.885954 3289 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:37:19.891289 kubelet[3289]: I0625 18:37:19.891217 3289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:37:19.891685 kubelet[3289]: I0625 18:37:19.891657 3289 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:37:19.899693 kubelet[3289]: I0625 18:37:19.899662 3289 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:37:19.903236 kubelet[3289]: I0625 18:37:19.903045 3289 
server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:37:19.903482 kubelet[3289]: I0625 18:37:19.903337 3289 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:37:19.904551 kubelet[3289]: I0625 18:37:19.904537 3289 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:37:19.944276 kubelet[3289]: E0625 18:37:19.944224 3289 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:37:19.944790 kubelet[3289]: I0625 18:37:19.944764 3289 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:37:19.944790 kubelet[3289]: I0625 18:37:19.944784 3289 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:37:19.946256 kubelet[3289]: I0625 18:37:19.946220 3289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:37:19.947777 kubelet[3289]: I0625 18:37:19.946495 3289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:37:19.957610 kubelet[3289]: I0625 18:37:19.957540 3289 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Jun 25 18:37:19.957610 kubelet[3289]: I0625 18:37:19.957588 3289 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:37:19.957610 kubelet[3289]: I0625 18:37:19.957611 3289 kubelet.go:2337] "Starting kubelet main sync loop"
Jun 25 18:37:19.958031 kubelet[3289]: E0625 18:37:19.957657 3289 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:37:20.020037 kubelet[3289]: I0625 18:37:20.019969 3289 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-217"
Jun 25 18:37:20.041618 kubelet[3289]: I0625 18:37:20.041573 3289 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-217"
Jun 25 18:37:20.041760 kubelet[3289]: I0625 18:37:20.041673 3289 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-217"
Jun 25 18:37:20.058651 kubelet[3289]: E0625 18:37:20.058595 3289 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:37:20.124950 kubelet[3289]: I0625 18:37:20.124832 3289 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:37:20.124950 kubelet[3289]: I0625 18:37:20.124856 3289 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:37:20.124950 kubelet[3289]: I0625 18:37:20.124878 3289 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:37:20.125157 kubelet[3289]: I0625 18:37:20.125054 3289 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 25 18:37:20.125157 kubelet[3289]: I0625 18:37:20.125066 3289 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 25 18:37:20.125157 kubelet[3289]: I0625 18:37:20.125091 3289 policy_none.go:49] "None policy: Start"
Jun 25 18:37:20.126338 kubelet[3289]: I0625 18:37:20.126297 3289 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 25 18:37:20.126338 kubelet[3289]: I0625 18:37:20.126330 3289 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:37:20.127177 kubelet[3289]: I0625 18:37:20.126554 3289 state_mem.go:75] "Updated machine memory state"
Jun 25 18:37:20.142914 kubelet[3289]: I0625 18:37:20.141811 3289 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:37:20.142914 kubelet[3289]: I0625 18:37:20.142018 3289 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 25 18:37:20.145189 kubelet[3289]: I0625 18:37:20.144616 3289 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:37:20.260469 kubelet[3289]: I0625 18:37:20.259282 3289 topology_manager.go:215] "Topology Admit Handler" podUID="05377a85f48908c68054e5e873cba8c6" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-217"
Jun 25 18:37:20.260469 kubelet[3289]: I0625 18:37:20.259397 3289 topology_manager.go:215] "Topology Admit Handler" podUID="653716146b61d24ddda129dbc59b717c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.260469 kubelet[3289]: I0625 18:37:20.259487 3289 topology_manager.go:215] "Topology Admit Handler" podUID="47019425b534b0fbaf3ce11d62eda7d2" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-217"
Jun 25 18:37:20.306140 kubelet[3289]: I0625 18:37:20.306099 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.306140 kubelet[3289]: I0625 18:37:20.306143 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.306355 kubelet[3289]: I0625 18:37:20.306170 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.306355 kubelet[3289]: I0625 18:37:20.306195 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.306355 kubelet[3289]: I0625 18:37:20.306234 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/653716146b61d24ddda129dbc59b717c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-217\" (UID: \"653716146b61d24ddda129dbc59b717c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-217"
Jun 25 18:37:20.306355 kubelet[3289]: I0625 18:37:20.306269 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47019425b534b0fbaf3ce11d62eda7d2-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-217\" (UID: \"47019425b534b0fbaf3ce11d62eda7d2\") " pod="kube-system/kube-scheduler-ip-172-31-20-217"
Jun 25 18:37:20.306355 kubelet[3289]: I0625 18:37:20.306297 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-ca-certs\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217"
Jun 25 18:37:20.306538 kubelet[3289]: I0625 18:37:20.306319 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217"
Jun 25 18:37:20.306538 kubelet[3289]: I0625 18:37:20.306346 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05377a85f48908c68054e5e873cba8c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-217\" (UID: \"05377a85f48908c68054e5e873cba8c6\") " pod="kube-system/kube-apiserver-ip-172-31-20-217"
Jun 25 18:37:20.829560 kubelet[3289]: I0625 18:37:20.827866 3289 apiserver.go:52] "Watching apiserver"
Jun 25 18:37:20.905058 kubelet[3289]: I0625 18:37:20.904905 3289 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jun 25 18:37:21.296400 kubelet[3289]: I0625 18:37:21.296230 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-217" podStartSLOduration=1.2962063719999999 podStartE2EDuration="1.296206372s" podCreationTimestamp="2024-06-25 18:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:21.191081487 +0000 UTC m=+1.603167381" watchObservedRunningTime="2024-06-25 18:37:21.296206372 +0000 UTC m=+1.708292263"
Jun 25 18:37:21.352707 kubelet[3289]: I0625 18:37:21.352628 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-217" podStartSLOduration=1.352559793 podStartE2EDuration="1.352559793s" podCreationTimestamp="2024-06-25 18:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:21.298051512 +0000 UTC m=+1.710137406" watchObservedRunningTime="2024-06-25 18:37:21.352559793 +0000 UTC m=+1.764645684"
Jun 25 18:37:24.560872 kubelet[3289]: I0625 18:37:24.560783 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-217" podStartSLOduration=4.560760864 podStartE2EDuration="4.560760864s" podCreationTimestamp="2024-06-25 18:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:21.355306446 +0000 UTC m=+1.767392343" watchObservedRunningTime="2024-06-25 18:37:24.560760864 +0000 UTC m=+4.972846754"
Jun 25 18:37:26.175556 sudo[2199]: pam_unix(sudo:session): session closed for user root
Jun 25 18:37:26.199475 sshd[2196]: pam_unix(sshd:session): session closed for user core
Jun 25 18:37:26.204833 systemd[1]: sshd@6-172.31.20.217:22-139.178.68.195:37194.service: Deactivated successfully.
Jun 25 18:37:26.208394 systemd[1]: session-7.scope: Deactivated successfully.
Jun 25 18:37:26.208652 systemd[1]: session-7.scope: Consumed 5.558s CPU time, 137.9M memory peak, 0B memory swap peak.
Jun 25 18:37:26.212210 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit.
Jun 25 18:37:26.215387 systemd-logind[1858]: Removed session 7.
Jun 25 18:37:32.760074 kubelet[3289]: I0625 18:37:32.760028 3289 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 18:37:32.761574 containerd[1879]: time="2024-06-25T18:37:32.761524434Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 18:37:32.762123 kubelet[3289]: I0625 18:37:32.761965 3289 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 18:37:33.924413 kubelet[3289]: I0625 18:37:33.924360 3289 topology_manager.go:215] "Topology Admit Handler" podUID="03e053b2-de4e-43b7-8d68-8aac39da9926" podNamespace="kube-system" podName="kube-proxy-pg7cl"
Jun 25 18:37:33.975013 systemd[1]: Created slice kubepods-besteffort-pod03e053b2_de4e_43b7_8d68_8aac39da9926.slice - libcontainer container kubepods-besteffort-pod03e053b2_de4e_43b7_8d68_8aac39da9926.slice.
Jun 25 18:37:34.115602 kubelet[3289]: I0625 18:37:34.115483 3289 topology_manager.go:215] "Topology Admit Handler" podUID="c73d6da1-276c-457c-80b2-89ccdbb495a3" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-mxzx7"
Jun 25 18:37:34.126505 kubelet[3289]: I0625 18:37:34.125897 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03e053b2-de4e-43b7-8d68-8aac39da9926-kube-proxy\") pod \"kube-proxy-pg7cl\" (UID: \"03e053b2-de4e-43b7-8d68-8aac39da9926\") " pod="kube-system/kube-proxy-pg7cl"
Jun 25 18:37:34.128832 kubelet[3289]: I0625 18:37:34.128177 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03e053b2-de4e-43b7-8d68-8aac39da9926-xtables-lock\") pod \"kube-proxy-pg7cl\" (UID: \"03e053b2-de4e-43b7-8d68-8aac39da9926\") " pod="kube-system/kube-proxy-pg7cl"
Jun 25 18:37:34.128832 kubelet[3289]: I0625 18:37:34.128226 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03e053b2-de4e-43b7-8d68-8aac39da9926-lib-modules\") pod \"kube-proxy-pg7cl\" (UID: \"03e053b2-de4e-43b7-8d68-8aac39da9926\") " pod="kube-system/kube-proxy-pg7cl"
Jun 25 18:37:34.128832 kubelet[3289]: I0625 18:37:34.128250 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqbr\" (UniqueName: \"kubernetes.io/projected/03e053b2-de4e-43b7-8d68-8aac39da9926-kube-api-access-4tqbr\") pod \"kube-proxy-pg7cl\" (UID: \"03e053b2-de4e-43b7-8d68-8aac39da9926\") " pod="kube-system/kube-proxy-pg7cl"
Jun 25 18:37:34.130289 systemd[1]: Created slice kubepods-besteffort-podc73d6da1_276c_457c_80b2_89ccdbb495a3.slice - libcontainer container kubepods-besteffort-podc73d6da1_276c_457c_80b2_89ccdbb495a3.slice.
Jun 25 18:37:34.137526 kubelet[3289]: W0625 18:37:34.137490 3289 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-20-217" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-20-217' and this object
Jun 25 18:37:34.138260 kubelet[3289]: E0625 18:37:34.138140 3289 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-20-217" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-20-217' and this object
Jun 25 18:37:34.138314 kubelet[3289]: W0625 18:37:34.138284 3289 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-217" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-20-217' and this object
Jun 25 18:37:34.138314 kubelet[3289]: E0625 18:37:34.138306 3289 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-217" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-20-217' and this object
Jun 25 18:37:34.230279 kubelet[3289]: I0625 18:37:34.229636 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c73d6da1-276c-457c-80b2-89ccdbb495a3-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-mxzx7\" (UID: \"c73d6da1-276c-457c-80b2-89ccdbb495a3\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mxzx7"
Jun 25 18:37:34.230279 kubelet[3289]: I0625 18:37:34.229893 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8zl\" (UniqueName: \"kubernetes.io/projected/c73d6da1-276c-457c-80b2-89ccdbb495a3-kube-api-access-gs8zl\") pod \"tigera-operator-76ff79f7fd-mxzx7\" (UID: \"c73d6da1-276c-457c-80b2-89ccdbb495a3\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mxzx7"
Jun 25 18:37:34.288942 containerd[1879]: time="2024-06-25T18:37:34.288893311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg7cl,Uid:03e053b2-de4e-43b7-8d68-8aac39da9926,Namespace:kube-system,Attempt:0,}"
Jun 25 18:37:34.334375 containerd[1879]: time="2024-06-25T18:37:34.333846703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:37:34.334375 containerd[1879]: time="2024-06-25T18:37:34.333919932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:37:34.334375 containerd[1879]: time="2024-06-25T18:37:34.334231577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:37:34.334898 containerd[1879]: time="2024-06-25T18:37:34.334351152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:37:34.378132 systemd[1]: Started cri-containerd-65a25737be8b76954d7b2eb33018b4bd1f551fcd373c4e43d8b46e18f251a6c6.scope - libcontainer container 65a25737be8b76954d7b2eb33018b4bd1f551fcd373c4e43d8b46e18f251a6c6.
Jun 25 18:37:34.432851 containerd[1879]: time="2024-06-25T18:37:34.432782889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg7cl,Uid:03e053b2-de4e-43b7-8d68-8aac39da9926,Namespace:kube-system,Attempt:0,} returns sandbox id \"65a25737be8b76954d7b2eb33018b4bd1f551fcd373c4e43d8b46e18f251a6c6\""
Jun 25 18:37:34.441180 containerd[1879]: time="2024-06-25T18:37:34.441067702Z" level=info msg="CreateContainer within sandbox \"65a25737be8b76954d7b2eb33018b4bd1f551fcd373c4e43d8b46e18f251a6c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 25 18:37:34.475278 containerd[1879]: time="2024-06-25T18:37:34.475225631Z" level=info msg="CreateContainer within sandbox \"65a25737be8b76954d7b2eb33018b4bd1f551fcd373c4e43d8b46e18f251a6c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea9dac68f890a0bf2c69991fa3c08f4b478558c0945269b6dc13a69d7eadb424\""
Jun 25 18:37:34.478667 containerd[1879]: time="2024-06-25T18:37:34.478619841Z" level=info msg="StartContainer for \"ea9dac68f890a0bf2c69991fa3c08f4b478558c0945269b6dc13a69d7eadb424\""
Jun 25 18:37:34.525374 systemd[1]: Started cri-containerd-ea9dac68f890a0bf2c69991fa3c08f4b478558c0945269b6dc13a69d7eadb424.scope - libcontainer container ea9dac68f890a0bf2c69991fa3c08f4b478558c0945269b6dc13a69d7eadb424.
Jun 25 18:37:34.648967 containerd[1879]: time="2024-06-25T18:37:34.648912903Z" level=info msg="StartContainer for \"ea9dac68f890a0bf2c69991fa3c08f4b478558c0945269b6dc13a69d7eadb424\" returns successfully"
Jun 25 18:37:35.163284 kubelet[3289]: I0625 18:37:35.163223 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pg7cl" podStartSLOduration=2.163200259 podStartE2EDuration="2.163200259s" podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:37:35.161111625 +0000 UTC m=+15.573197518" watchObservedRunningTime="2024-06-25 18:37:35.163200259 +0000 UTC m=+15.575286151"
Jun 25 18:37:35.349406 kubelet[3289]: E0625 18:37:35.349291 3289 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jun 25 18:37:35.349406 kubelet[3289]: E0625 18:37:35.349405 3289 projected.go:200] Error preparing data for projected volume kube-api-access-gs8zl for pod tigera-operator/tigera-operator-76ff79f7fd-mxzx7: failed to sync configmap cache: timed out waiting for the condition
Jun 25 18:37:35.370256 kubelet[3289]: E0625 18:37:35.369006 3289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c73d6da1-276c-457c-80b2-89ccdbb495a3-kube-api-access-gs8zl podName:c73d6da1-276c-457c-80b2-89ccdbb495a3 nodeName:}" failed. No retries permitted until 2024-06-25 18:37:35.868973122 +0000 UTC m=+16.281058996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gs8zl" (UniqueName: "kubernetes.io/projected/c73d6da1-276c-457c-80b2-89ccdbb495a3-kube-api-access-gs8zl") pod "tigera-operator-76ff79f7fd-mxzx7" (UID: "c73d6da1-276c-457c-80b2-89ccdbb495a3") : failed to sync configmap cache: timed out waiting for the condition
Jun 25 18:37:36.241007 containerd[1879]: time="2024-06-25T18:37:36.240959477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mxzx7,Uid:c73d6da1-276c-457c-80b2-89ccdbb495a3,Namespace:tigera-operator,Attempt:0,}"
Jun 25 18:37:36.314734 containerd[1879]: time="2024-06-25T18:37:36.307123289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:37:36.314734 containerd[1879]: time="2024-06-25T18:37:36.309056571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:37:36.314734 containerd[1879]: time="2024-06-25T18:37:36.310638452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:37:36.314734 containerd[1879]: time="2024-06-25T18:37:36.312196220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:37:36.379039 systemd[1]: Started cri-containerd-b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9.scope - libcontainer container b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9.
Jun 25 18:37:36.524248 containerd[1879]: time="2024-06-25T18:37:36.524149666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mxzx7,Uid:c73d6da1-276c-457c-80b2-89ccdbb495a3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9\""
Jun 25 18:37:36.543207 containerd[1879]: time="2024-06-25T18:37:36.543157965Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 18:37:38.038515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383204814.mount: Deactivated successfully.
Jun 25 18:37:39.038518 containerd[1879]: time="2024-06-25T18:37:39.036647126Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:37:39.039972 containerd[1879]: time="2024-06-25T18:37:39.039919387Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076088"
Jun 25 18:37:39.041845 containerd[1879]: time="2024-06-25T18:37:39.041758429Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:37:39.048040 containerd[1879]: time="2024-06-25T18:37:39.047983347Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:37:39.049752 containerd[1879]: time="2024-06-25T18:37:39.049599453Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.506395679s"
Jun 25 18:37:39.049752 containerd[1879]: time="2024-06-25T18:37:39.049649010Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jun 25 18:37:39.062395 containerd[1879]: time="2024-06-25T18:37:39.062145953Z" level=info msg="CreateContainer within sandbox \"b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 25 18:37:39.091050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754951672.mount: Deactivated successfully.
Jun 25 18:37:39.096649 containerd[1879]: time="2024-06-25T18:37:39.096596481Z" level=info msg="CreateContainer within sandbox \"b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e\""
Jun 25 18:37:39.097466 containerd[1879]: time="2024-06-25T18:37:39.097429774Z" level=info msg="StartContainer for \"e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e\""
Jun 25 18:37:39.161999 systemd[1]: Started cri-containerd-e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e.scope - libcontainer container e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e.
Jun 25 18:37:39.218274 containerd[1879]: time="2024-06-25T18:37:39.218229304Z" level=info msg="StartContainer for \"e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e\" returns successfully"
Jun 25 18:37:40.227262 kubelet[3289]: I0625 18:37:40.227066 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-mxzx7" podStartSLOduration=4.717109078 podStartE2EDuration="7.227043978s" podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="2024-06-25 18:37:36.542650296 +0000 UTC m=+16.954736170" lastFinishedPulling="2024-06-25 18:37:39.052585193 +0000 UTC m=+19.464671070" observedRunningTime="2024-06-25 18:37:40.227041373 +0000 UTC m=+20.639127266" watchObservedRunningTime="2024-06-25 18:37:40.227043978 +0000 UTC m=+20.639129858"
Jun 25 18:37:42.774914 kubelet[3289]: I0625 18:37:42.774855 3289 topology_manager.go:215] "Topology Admit Handler" podUID="c4f906ec-f0cc-46a1-9ee2-d232971306ef" podNamespace="calico-system" podName="calico-typha-5674d47f95-ncpmt"
Jun 25 18:37:42.794613 systemd[1]: Created slice kubepods-besteffort-podc4f906ec_f0cc_46a1_9ee2_d232971306ef.slice - libcontainer container kubepods-besteffort-podc4f906ec_f0cc_46a1_9ee2_d232971306ef.slice.
Jun 25 18:37:42.823399 kubelet[3289]: I0625 18:37:42.823336 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nghxc\" (UniqueName: \"kubernetes.io/projected/c4f906ec-f0cc-46a1-9ee2-d232971306ef-kube-api-access-nghxc\") pod \"calico-typha-5674d47f95-ncpmt\" (UID: \"c4f906ec-f0cc-46a1-9ee2-d232971306ef\") " pod="calico-system/calico-typha-5674d47f95-ncpmt"
Jun 25 18:37:42.823563 kubelet[3289]: I0625 18:37:42.823441 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f906ec-f0cc-46a1-9ee2-d232971306ef-tigera-ca-bundle\") pod \"calico-typha-5674d47f95-ncpmt\" (UID: \"c4f906ec-f0cc-46a1-9ee2-d232971306ef\") " pod="calico-system/calico-typha-5674d47f95-ncpmt"
Jun 25 18:37:42.823563 kubelet[3289]: I0625 18:37:42.823470 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c4f906ec-f0cc-46a1-9ee2-d232971306ef-typha-certs\") pod \"calico-typha-5674d47f95-ncpmt\" (UID: \"c4f906ec-f0cc-46a1-9ee2-d232971306ef\") " pod="calico-system/calico-typha-5674d47f95-ncpmt"
Jun 25 18:37:42.925355 kubelet[3289]: I0625 18:37:42.925303 3289 topology_manager.go:215] "Topology Admit Handler" podUID="78e32cd9-2927-4b40-aeec-56e1c5ee7fec" podNamespace="calico-system" podName="calico-node-fl492"
Jun 25 18:37:42.971597 systemd[1]: Created slice kubepods-besteffort-pod78e32cd9_2927_4b40_aeec_56e1c5ee7fec.slice - libcontainer container kubepods-besteffort-pod78e32cd9_2927_4b40_aeec_56e1c5ee7fec.slice.
Jun 25 18:37:43.025830 kubelet[3289]: I0625 18:37:43.025194 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-cni-log-dir\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.025830 kubelet[3289]: I0625 18:37:43.025250 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-lib-modules\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.025830 kubelet[3289]: I0625 18:37:43.025276 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-cni-net-dir\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.025830 kubelet[3289]: I0625 18:37:43.025303 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-cni-bin-dir\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.025830 kubelet[3289]: I0625 18:37:43.025334 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfmd\" (UniqueName: \"kubernetes.io/projected/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-kube-api-access-spfmd\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026321 kubelet[3289]: I0625 18:37:43.025366 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-policysync\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026321 kubelet[3289]: I0625 18:37:43.025391 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-var-lib-calico\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026321 kubelet[3289]: I0625 18:37:43.025434 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-xtables-lock\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026321 kubelet[3289]: I0625 18:37:43.025457 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-var-run-calico\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026321 kubelet[3289]: I0625 18:37:43.025481 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-flexvol-driver-host\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026521 kubelet[3289]: I0625 18:37:43.025553 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-tigera-ca-bundle\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.026521 kubelet[3289]: I0625 18:37:43.025578 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78e32cd9-2927-4b40-aeec-56e1c5ee7fec-node-certs\") pod \"calico-node-fl492\" (UID: \"78e32cd9-2927-4b40-aeec-56e1c5ee7fec\") " pod="calico-system/calico-node-fl492"
Jun 25 18:37:43.106921 kubelet[3289]: I0625 18:37:43.105093 3289 topology_manager.go:215] "Topology Admit Handler" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" podNamespace="calico-system" podName="csi-node-driver-jd8sm"
Jun 25 18:37:43.106921 kubelet[3289]: E0625 18:37:43.105632 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de"
Jun 25 18:37:43.114070 containerd[1879]: time="2024-06-25T18:37:43.114025728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5674d47f95-ncpmt,Uid:c4f906ec-f0cc-46a1-9ee2-d232971306ef,Namespace:calico-system,Attempt:0,}"
Jun 25 18:37:43.131703 kubelet[3289]: I0625 18:37:43.130848 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4109c626-0c92-402f-a0e5-85bdc1e223de-registration-dir\") pod \"csi-node-driver-jd8sm\" (UID: \"4109c626-0c92-402f-a0e5-85bdc1e223de\") " pod="calico-system/csi-node-driver-jd8sm"
Jun 25 18:37:43.135498 kubelet[3289]: I0625 18:37:43.135463 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcw6\" (UniqueName: \"kubernetes.io/projected/4109c626-0c92-402f-a0e5-85bdc1e223de-kube-api-access-6tcw6\") pod \"csi-node-driver-jd8sm\" (UID: \"4109c626-0c92-402f-a0e5-85bdc1e223de\") " pod="calico-system/csi-node-driver-jd8sm"
Jun 25 18:37:43.135635 kubelet[3289]: I0625 18:37:43.135593 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4109c626-0c92-402f-a0e5-85bdc1e223de-kubelet-dir\") pod \"csi-node-driver-jd8sm\" (UID: \"4109c626-0c92-402f-a0e5-85bdc1e223de\") " pod="calico-system/csi-node-driver-jd8sm"
Jun 25 18:37:43.136496 kubelet[3289]: I0625 18:37:43.135635 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4109c626-0c92-402f-a0e5-85bdc1e223de-socket-dir\") pod \"csi-node-driver-jd8sm\" (UID: \"4109c626-0c92-402f-a0e5-85bdc1e223de\") " pod="calico-system/csi-node-driver-jd8sm"
Jun 25 18:37:43.136496 kubelet[3289]: I0625 18:37:43.136482 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4109c626-0c92-402f-a0e5-85bdc1e223de-varrun\") pod \"csi-node-driver-jd8sm\" (UID: \"4109c626-0c92-402f-a0e5-85bdc1e223de\") " pod="calico-system/csi-node-driver-jd8sm"
Jun 25 18:37:43.165274 kubelet[3289]: E0625 18:37:43.165153 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.165274 kubelet[3289]: W0625 18:37:43.165194 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.165274 kubelet[3289]: E0625 18:37:43.165223 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.246831 kubelet[3289]: E0625 18:37:43.244661 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.246831 kubelet[3289]: W0625 18:37:43.244692 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.246831 kubelet[3289]: E0625 18:37:43.244717 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.246831 kubelet[3289]: E0625 18:37:43.245780 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.246831 kubelet[3289]: W0625 18:37:43.245819 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.246831 kubelet[3289]: E0625 18:37:43.245840 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.247895 kubelet[3289]: E0625 18:37:43.247373 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.247895 kubelet[3289]: W0625 18:37:43.247392 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.248057 kubelet[3289]: E0625 18:37:43.247938 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.248057 kubelet[3289]: W0625 18:37:43.247950 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.248057 kubelet[3289]: E0625 18:37:43.247968 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.248904 kubelet[3289]: E0625 18:37:43.248710 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.249264 kubelet[3289]: E0625 18:37:43.248782 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.249264 kubelet[3289]: W0625 18:37:43.248730 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.249264 kubelet[3289]: E0625 18:37:43.249118 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:37:43.251131 kubelet[3289]: E0625 18:37:43.250607 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.251131 kubelet[3289]: W0625 18:37:43.250624 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.251131 kubelet[3289]: E0625 18:37:43.250971 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:37:43.251131 kubelet[3289]: W0625 18:37:43.250984 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:37:43.251131 kubelet[3289]: E0625 18:37:43.251000 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.256137 kubelet[3289]: E0625 18:37:43.251948 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.256137 kubelet[3289]: W0625 18:37:43.251965 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.256137 kubelet[3289]: E0625 18:37:43.251981 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.256137 kubelet[3289]: E0625 18:37:43.252148 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.256137 kubelet[3289]: E0625 18:37:43.252596 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.256137 kubelet[3289]: W0625 18:37:43.252659 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.256137 kubelet[3289]: E0625 18:37:43.252691 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.273206 kubelet[3289]: E0625 18:37:43.273152 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.273206 kubelet[3289]: W0625 18:37:43.273192 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.273405 kubelet[3289]: E0625 18:37:43.273222 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.273741 kubelet[3289]: E0625 18:37:43.273689 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.273741 kubelet[3289]: W0625 18:37:43.273712 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.273906 kubelet[3289]: E0625 18:37:43.273848 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.274267 kubelet[3289]: E0625 18:37:43.274245 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.274267 kubelet[3289]: W0625 18:37:43.274265 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.274391 kubelet[3289]: E0625 18:37:43.274354 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.275389 kubelet[3289]: E0625 18:37:43.274638 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.275389 kubelet[3289]: W0625 18:37:43.274684 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.275389 kubelet[3289]: E0625 18:37:43.274819 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.275389 kubelet[3289]: E0625 18:37:43.275066 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.275389 kubelet[3289]: W0625 18:37:43.275077 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.275389 kubelet[3289]: E0625 18:37:43.275162 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.275965 kubelet[3289]: E0625 18:37:43.275407 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.275965 kubelet[3289]: W0625 18:37:43.275417 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.275965 kubelet[3289]: E0625 18:37:43.275791 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.276140 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.282958 kubelet[3289]: W0625 18:37:43.276154 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.276232 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.276633 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.282958 kubelet[3289]: W0625 18:37:43.276645 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.276835 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.277005 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.282958 kubelet[3289]: W0625 18:37:43.277015 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.277188 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.282958 kubelet[3289]: E0625 18:37:43.277342 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.283551 kubelet[3289]: W0625 18:37:43.277351 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.277453 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.277767 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.283551 kubelet[3289]: W0625 18:37:43.277780 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.277846 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.278396 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.283551 kubelet[3289]: W0625 18:37:43.278408 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.278425 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.283551 kubelet[3289]: E0625 18:37:43.278754 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.283551 kubelet[3289]: W0625 18:37:43.278765 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.278918 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.279287 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.284223 kubelet[3289]: W0625 18:37:43.279297 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.279314 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.279640 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.284223 kubelet[3289]: W0625 18:37:43.279729 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.279854 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.280077 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.284223 kubelet[3289]: W0625 18:37:43.280369 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284223 kubelet[3289]: E0625 18:37:43.280455 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.284513 kubelet[3289]: E0625 18:37:43.280768 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.284513 kubelet[3289]: W0625 18:37:43.280779 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284513 kubelet[3289]: E0625 18:37:43.280832 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.284513 kubelet[3289]: E0625 18:37:43.281160 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.284513 kubelet[3289]: W0625 18:37:43.281171 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.284513 kubelet[3289]: E0625 18:37:43.281184 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.310204 containerd[1879]: time="2024-06-25T18:37:43.302943219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:43.310204 containerd[1879]: time="2024-06-25T18:37:43.303108463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:43.310204 containerd[1879]: time="2024-06-25T18:37:43.303142808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:43.310204 containerd[1879]: time="2024-06-25T18:37:43.303163818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:43.319571 containerd[1879]: time="2024-06-25T18:37:43.319526710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fl492,Uid:78e32cd9-2927-4b40-aeec-56e1c5ee7fec,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:43.355467 kubelet[3289]: E0625 18:37:43.355268 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.355467 kubelet[3289]: W0625 18:37:43.355298 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.355467 kubelet[3289]: E0625 18:37:43.355377 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.361625 kubelet[3289]: E0625 18:37:43.359404 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.364086 kubelet[3289]: W0625 18:37:43.359536 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.364086 kubelet[3289]: E0625 18:37:43.362994 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.370564 kubelet[3289]: E0625 18:37:43.369976 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.370564 kubelet[3289]: W0625 18:37:43.370002 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.370564 kubelet[3289]: E0625 18:37:43.370029 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.372921 kubelet[3289]: E0625 18:37:43.372884 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.372921 kubelet[3289]: W0625 18:37:43.372913 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.374662 kubelet[3289]: E0625 18:37:43.372938 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.375503 kubelet[3289]: E0625 18:37:43.375421 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.376136 kubelet[3289]: W0625 18:37:43.376088 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.377860 kubelet[3289]: E0625 18:37:43.376130 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.378968 kubelet[3289]: E0625 18:37:43.378943 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.378968 kubelet[3289]: W0625 18:37:43.378965 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.380417 kubelet[3289]: E0625 18:37:43.378991 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:37:43.385378 kubelet[3289]: E0625 18:37:43.381742 3289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:37:43.385378 kubelet[3289]: W0625 18:37:43.381759 3289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:37:43.385378 kubelet[3289]: E0625 18:37:43.381875 3289 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:37:43.397346 systemd[1]: Started cri-containerd-78a8b3a54587a618fe3fc06e7225578a5531e2a06ef68f5c953be4cd3dedaa09.scope - libcontainer container 78a8b3a54587a618fe3fc06e7225578a5531e2a06ef68f5c953be4cd3dedaa09. Jun 25 18:37:43.439180 containerd[1879]: time="2024-06-25T18:37:43.435786070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:37:43.439180 containerd[1879]: time="2024-06-25T18:37:43.436785055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:43.439180 containerd[1879]: time="2024-06-25T18:37:43.436830123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:37:43.439180 containerd[1879]: time="2024-06-25T18:37:43.436847435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:37:43.491039 systemd[1]: Started cri-containerd-e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca.scope - libcontainer container e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca. 
Jun 25 18:37:43.578939 containerd[1879]: time="2024-06-25T18:37:43.578233497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fl492,Uid:78e32cd9-2927-4b40-aeec-56e1c5ee7fec,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\"" Jun 25 18:37:43.582683 containerd[1879]: time="2024-06-25T18:37:43.582191973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:37:43.669324 containerd[1879]: time="2024-06-25T18:37:43.669283330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5674d47f95-ncpmt,Uid:c4f906ec-f0cc-46a1-9ee2-d232971306ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"78a8b3a54587a618fe3fc06e7225578a5531e2a06ef68f5c953be4cd3dedaa09\"" Jun 25 18:37:44.960706 kubelet[3289]: E0625 18:37:44.959282 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:45.260645 containerd[1879]: time="2024-06-25T18:37:45.260361862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:45.264531 containerd[1879]: time="2024-06-25T18:37:45.263618577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:37:45.265784 containerd[1879]: time="2024-06-25T18:37:45.265737074Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:45.270150 containerd[1879]: time="2024-06-25T18:37:45.270101061Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:45.274991 containerd[1879]: time="2024-06-25T18:37:45.274838403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.69259942s" Jun 25 18:37:45.275142 containerd[1879]: time="2024-06-25T18:37:45.274994650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:37:45.281256 containerd[1879]: time="2024-06-25T18:37:45.277275788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:37:45.283177 containerd[1879]: time="2024-06-25T18:37:45.283079011Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:37:45.339683 containerd[1879]: time="2024-06-25T18:37:45.339595963Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf\"" Jun 25 18:37:45.340637 containerd[1879]: time="2024-06-25T18:37:45.340581502Z" level=info msg="StartContainer for \"e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf\"" Jun 25 18:37:45.432299 systemd[1]: Started cri-containerd-e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf.scope - 
libcontainer container e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf. Jun 25 18:37:45.506698 containerd[1879]: time="2024-06-25T18:37:45.505515098Z" level=info msg="StartContainer for \"e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf\" returns successfully" Jun 25 18:37:45.549668 systemd[1]: cri-containerd-e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf.scope: Deactivated successfully. Jun 25 18:37:45.615531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf-rootfs.mount: Deactivated successfully. Jun 25 18:37:45.664789 containerd[1879]: time="2024-06-25T18:37:45.664383087Z" level=info msg="shim disconnected" id=e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf namespace=k8s.io Jun 25 18:37:45.664789 containerd[1879]: time="2024-06-25T18:37:45.664608814Z" level=warning msg="cleaning up after shim disconnected" id=e81de663d936e1a4595f8c94501a0ad4b83187817b73abce55d36af7f6b5a4cf namespace=k8s.io Jun 25 18:37:45.664789 containerd[1879]: time="2024-06-25T18:37:45.664624765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:37:46.958560 kubelet[3289]: E0625 18:37:46.958492 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:48.265053 containerd[1879]: time="2024-06-25T18:37:48.265001836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:48.267828 containerd[1879]: time="2024-06-25T18:37:48.267395186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 
18:37:48.270130 containerd[1879]: time="2024-06-25T18:37:48.269521706Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:48.283506 containerd[1879]: time="2024-06-25T18:37:48.283427899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:48.286435 containerd[1879]: time="2024-06-25T18:37:48.285968789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.008089227s" Jun 25 18:37:48.286435 containerd[1879]: time="2024-06-25T18:37:48.286344213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:37:48.340571 containerd[1879]: time="2024-06-25T18:37:48.340291502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:37:48.350207 containerd[1879]: time="2024-06-25T18:37:48.349023632Z" level=info msg="CreateContainer within sandbox \"78a8b3a54587a618fe3fc06e7225578a5531e2a06ef68f5c953be4cd3dedaa09\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:37:48.375361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154346081.mount: Deactivated successfully. 
Jun 25 18:37:48.382761 containerd[1879]: time="2024-06-25T18:37:48.382713731Z" level=info msg="CreateContainer within sandbox \"78a8b3a54587a618fe3fc06e7225578a5531e2a06ef68f5c953be4cd3dedaa09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"826738758e2adb2dabc766d9aefa87873c2b928c766368f8bdccf34e3306cc2d\"" Jun 25 18:37:48.383967 containerd[1879]: time="2024-06-25T18:37:48.383926932Z" level=info msg="StartContainer for \"826738758e2adb2dabc766d9aefa87873c2b928c766368f8bdccf34e3306cc2d\"" Jun 25 18:37:48.504004 systemd[1]: Started cri-containerd-826738758e2adb2dabc766d9aefa87873c2b928c766368f8bdccf34e3306cc2d.scope - libcontainer container 826738758e2adb2dabc766d9aefa87873c2b928c766368f8bdccf34e3306cc2d. Jun 25 18:37:48.637734 containerd[1879]: time="2024-06-25T18:37:48.637399929Z" level=info msg="StartContainer for \"826738758e2adb2dabc766d9aefa87873c2b928c766368f8bdccf34e3306cc2d\" returns successfully" Jun 25 18:37:48.958622 kubelet[3289]: E0625 18:37:48.957997 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:50.314218 kubelet[3289]: I0625 18:37:50.314184 3289 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:37:50.959766 kubelet[3289]: E0625 18:37:50.959711 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:51.353574 kubelet[3289]: I0625 18:37:51.353049 3289 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 
18:37:51.393252 kubelet[3289]: I0625 18:37:51.392694 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5674d47f95-ncpmt" podStartSLOduration=4.775462886 podStartE2EDuration="9.392672423s" podCreationTimestamp="2024-06-25 18:37:42 +0000 UTC" firstStartedPulling="2024-06-25 18:37:43.67184335 +0000 UTC m=+24.083929236" lastFinishedPulling="2024-06-25 18:37:48.289052901 +0000 UTC m=+28.701138773" observedRunningTime="2024-06-25 18:37:49.34087405 +0000 UTC m=+29.752959946" watchObservedRunningTime="2024-06-25 18:37:51.392672423 +0000 UTC m=+31.804758317" Jun 25 18:37:52.958376 kubelet[3289]: E0625 18:37:52.958322 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:54.083373 containerd[1879]: time="2024-06-25T18:37:54.083313607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:54.085004 containerd[1879]: time="2024-06-25T18:37:54.084829535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:37:54.087155 containerd[1879]: time="2024-06-25T18:37:54.087101015Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:54.094176 containerd[1879]: time="2024-06-25T18:37:54.094016513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:37:54.095559 containerd[1879]: 
time="2024-06-25T18:37:54.095326355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.754972077s" Jun 25 18:37:54.095559 containerd[1879]: time="2024-06-25T18:37:54.095372707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:37:54.099193 containerd[1879]: time="2024-06-25T18:37:54.099152202Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:37:54.142032 containerd[1879]: time="2024-06-25T18:37:54.141551571Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756\"" Jun 25 18:37:54.148822 containerd[1879]: time="2024-06-25T18:37:54.147051239Z" level=info msg="StartContainer for \"42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756\"" Jun 25 18:37:54.252079 systemd[1]: Started cri-containerd-42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756.scope - libcontainer container 42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756. 
Jun 25 18:37:54.316791 containerd[1879]: time="2024-06-25T18:37:54.316410121Z" level=info msg="StartContainer for \"42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756\" returns successfully" Jun 25 18:37:54.958660 kubelet[3289]: E0625 18:37:54.958585 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:55.522518 systemd[1]: cri-containerd-42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756.scope: Deactivated successfully. Jun 25 18:37:55.571890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756-rootfs.mount: Deactivated successfully. Jun 25 18:37:55.600763 containerd[1879]: time="2024-06-25T18:37:55.600685766Z" level=info msg="shim disconnected" id=42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756 namespace=k8s.io Jun 25 18:37:55.601671 containerd[1879]: time="2024-06-25T18:37:55.601339200Z" level=warning msg="cleaning up after shim disconnected" id=42f615de7ee45f73d738743efa5f2c86d24745538bdf7ff7efebcfd50d83b756 namespace=k8s.io Jun 25 18:37:55.601671 containerd[1879]: time="2024-06-25T18:37:55.601387904Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:37:55.622672 kubelet[3289]: I0625 18:37:55.621831 3289 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:37:55.679636 kubelet[3289]: I0625 18:37:55.676731 3289 topology_manager.go:215] "Topology Admit Handler" podUID="b58966d2-ccc7-40b6-ba32-ef9977463f92" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8zcvk" Jun 25 18:37:55.695978 kubelet[3289]: I0625 18:37:55.695920 3289 topology_manager.go:215] "Topology Admit Handler" 
podUID="0dc1be98-a119-4811-b95d-a24913f2cc14" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bblmb" Jun 25 18:37:55.696208 kubelet[3289]: I0625 18:37:55.696181 3289 topology_manager.go:215] "Topology Admit Handler" podUID="239a9240-02bc-486a-9fcd-8b5b78a2cc4e" podNamespace="calico-system" podName="calico-kube-controllers-6c77496f95-mnncn" Jun 25 18:37:55.708452 kubelet[3289]: I0625 18:37:55.708210 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqtg9\" (UniqueName: \"kubernetes.io/projected/0dc1be98-a119-4811-b95d-a24913f2cc14-kube-api-access-sqtg9\") pod \"coredns-7db6d8ff4d-bblmb\" (UID: \"0dc1be98-a119-4811-b95d-a24913f2cc14\") " pod="kube-system/coredns-7db6d8ff4d-bblmb" Jun 25 18:37:55.708452 kubelet[3289]: I0625 18:37:55.708253 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b58966d2-ccc7-40b6-ba32-ef9977463f92-config-volume\") pod \"coredns-7db6d8ff4d-8zcvk\" (UID: \"b58966d2-ccc7-40b6-ba32-ef9977463f92\") " pod="kube-system/coredns-7db6d8ff4d-8zcvk" Jun 25 18:37:55.708452 kubelet[3289]: I0625 18:37:55.708285 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/239a9240-02bc-486a-9fcd-8b5b78a2cc4e-tigera-ca-bundle\") pod \"calico-kube-controllers-6c77496f95-mnncn\" (UID: \"239a9240-02bc-486a-9fcd-8b5b78a2cc4e\") " pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" Jun 25 18:37:55.708452 kubelet[3289]: I0625 18:37:55.708318 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z88rw\" (UniqueName: \"kubernetes.io/projected/239a9240-02bc-486a-9fcd-8b5b78a2cc4e-kube-api-access-z88rw\") pod \"calico-kube-controllers-6c77496f95-mnncn\" (UID: \"239a9240-02bc-486a-9fcd-8b5b78a2cc4e\") " 
pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" Jun 25 18:37:55.708452 kubelet[3289]: I0625 18:37:55.708344 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tnv\" (UniqueName: \"kubernetes.io/projected/b58966d2-ccc7-40b6-ba32-ef9977463f92-kube-api-access-k8tnv\") pod \"coredns-7db6d8ff4d-8zcvk\" (UID: \"b58966d2-ccc7-40b6-ba32-ef9977463f92\") " pod="kube-system/coredns-7db6d8ff4d-8zcvk" Jun 25 18:37:55.709404 kubelet[3289]: I0625 18:37:55.708369 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dc1be98-a119-4811-b95d-a24913f2cc14-config-volume\") pod \"coredns-7db6d8ff4d-bblmb\" (UID: \"0dc1be98-a119-4811-b95d-a24913f2cc14\") " pod="kube-system/coredns-7db6d8ff4d-bblmb" Jun 25 18:37:55.746623 systemd[1]: Created slice kubepods-burstable-podb58966d2_ccc7_40b6_ba32_ef9977463f92.slice - libcontainer container kubepods-burstable-podb58966d2_ccc7_40b6_ba32_ef9977463f92.slice. Jun 25 18:37:55.758749 systemd[1]: Created slice kubepods-besteffort-pod239a9240_02bc_486a_9fcd_8b5b78a2cc4e.slice - libcontainer container kubepods-besteffort-pod239a9240_02bc_486a_9fcd_8b5b78a2cc4e.slice. Jun 25 18:37:55.773372 systemd[1]: Created slice kubepods-burstable-pod0dc1be98_a119_4811_b95d_a24913f2cc14.slice - libcontainer container kubepods-burstable-pod0dc1be98_a119_4811_b95d_a24913f2cc14.slice. 
Jun 25 18:37:56.054833 containerd[1879]: time="2024-06-25T18:37:56.054684291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8zcvk,Uid:b58966d2-ccc7-40b6-ba32-ef9977463f92,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:56.070821 containerd[1879]: time="2024-06-25T18:37:56.068933651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c77496f95-mnncn,Uid:239a9240-02bc-486a-9fcd-8b5b78a2cc4e,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:56.083334 containerd[1879]: time="2024-06-25T18:37:56.082030096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bblmb,Uid:0dc1be98-a119-4811-b95d-a24913f2cc14,Namespace:kube-system,Attempt:0,}" Jun 25 18:37:56.355524 containerd[1879]: time="2024-06-25T18:37:56.355085538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:37:56.381267 containerd[1879]: time="2024-06-25T18:37:56.380422920Z" level=error msg="Failed to destroy network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.411485 containerd[1879]: time="2024-06-25T18:37:56.411423595Z" level=error msg="encountered an error cleaning up failed sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.411676 containerd[1879]: time="2024-06-25T18:37:56.411436652Z" level=error msg="Failed to destroy network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.411948 containerd[1879]: time="2024-06-25T18:37:56.411907808Z" level=error msg="encountered an error cleaning up failed sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.412037 containerd[1879]: time="2024-06-25T18:37:56.411977964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8zcvk,Uid:b58966d2-ccc7-40b6-ba32-ef9977463f92,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.412446 containerd[1879]: time="2024-06-25T18:37:56.412363377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bblmb,Uid:0dc1be98-a119-4811-b95d-a24913f2cc14,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.417926 containerd[1879]: time="2024-06-25T18:37:56.417268399Z" level=error msg="Failed to destroy network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jun 25 18:37:56.417926 containerd[1879]: time="2024-06-25T18:37:56.417760163Z" level=error msg="encountered an error cleaning up failed sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.417926 containerd[1879]: time="2024-06-25T18:37:56.417865710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c77496f95-mnncn,Uid:239a9240-02bc-486a-9fcd-8b5b78a2cc4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.423378 kubelet[3289]: E0625 18:37:56.412327 3289 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.425036 kubelet[3289]: E0625 18:37:56.423405 3289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8zcvk" Jun 25 
18:37:56.425036 kubelet[3289]: E0625 18:37:56.423434 3289 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8zcvk" Jun 25 18:37:56.425036 kubelet[3289]: E0625 18:37:56.423487 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8zcvk_kube-system(b58966d2-ccc7-40b6-ba32-ef9977463f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8zcvk_kube-system(b58966d2-ccc7-40b6-ba32-ef9977463f92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8zcvk" podUID="b58966d2-ccc7-40b6-ba32-ef9977463f92" Jun 25 18:37:56.425360 kubelet[3289]: E0625 18:37:56.412603 3289 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.425627 kubelet[3289]: E0625 18:37:56.425578 3289 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:56.425753 kubelet[3289]: E0625 18:37:56.425634 3289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" Jun 25 18:37:56.425753 kubelet[3289]: E0625 18:37:56.425707 3289 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" Jun 25 18:37:56.425931 kubelet[3289]: E0625 18:37:56.425756 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c77496f95-mnncn_calico-system(239a9240-02bc-486a-9fcd-8b5b78a2cc4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c77496f95-mnncn_calico-system(239a9240-02bc-486a-9fcd-8b5b78a2cc4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" podUID="239a9240-02bc-486a-9fcd-8b5b78a2cc4e" Jun 25 18:37:56.430930 kubelet[3289]: E0625 18:37:56.430732 3289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bblmb" Jun 25 18:37:56.431089 kubelet[3289]: E0625 18:37:56.430935 3289 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bblmb" Jun 25 18:37:56.431157 kubelet[3289]: E0625 18:37:56.431084 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bblmb_kube-system(0dc1be98-a119-4811-b95d-a24913f2cc14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bblmb_kube-system(0dc1be98-a119-4811-b95d-a24913f2cc14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bblmb" podUID="0dc1be98-a119-4811-b95d-a24913f2cc14" Jun 25 18:37:56.570346 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849-shm.mount: Deactivated successfully. Jun 25 18:37:56.972627 systemd[1]: Created slice kubepods-besteffort-pod4109c626_0c92_402f_a0e5_85bdc1e223de.slice - libcontainer container kubepods-besteffort-pod4109c626_0c92_402f_a0e5_85bdc1e223de.slice. Jun 25 18:37:56.977187 containerd[1879]: time="2024-06-25T18:37:56.976786362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd8sm,Uid:4109c626-0c92-402f-a0e5-85bdc1e223de,Namespace:calico-system,Attempt:0,}" Jun 25 18:37:57.088031 containerd[1879]: time="2024-06-25T18:37:57.087973619Z" level=error msg="Failed to destroy network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.097003 containerd[1879]: time="2024-06-25T18:37:57.096197021Z" level=error msg="encountered an error cleaning up failed sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.097003 containerd[1879]: time="2024-06-25T18:37:57.096285585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd8sm,Uid:4109c626-0c92-402f-a0e5-85bdc1e223de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.097408 
kubelet[3289]: E0625 18:37:57.096600 3289 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.097408 kubelet[3289]: E0625 18:37:57.096665 3289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd8sm" Jun 25 18:37:57.097408 kubelet[3289]: E0625 18:37:57.096690 3289 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd8sm" Jun 25 18:37:57.097560 kubelet[3289]: E0625 18:37:57.096742 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd8sm_calico-system(4109c626-0c92-402f-a0e5-85bdc1e223de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd8sm_calico-system(4109c626-0c92-402f-a0e5-85bdc1e223de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:37:57.098335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923-shm.mount: Deactivated successfully. Jun 25 18:37:57.356958 kubelet[3289]: I0625 18:37:57.356742 3289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:37:57.361315 kubelet[3289]: I0625 18:37:57.360864 3289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:37:57.380676 kubelet[3289]: I0625 18:37:57.380504 3289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:37:57.390223 kubelet[3289]: I0625 18:37:57.389309 3289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:37:57.406739 containerd[1879]: time="2024-06-25T18:37:57.406692933Z" level=info msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" Jun 25 18:37:57.407015 containerd[1879]: time="2024-06-25T18:37:57.406987094Z" level=info msg="Ensure that sandbox a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346 in task-service has been cleanup successfully" Jun 25 18:37:57.415842 containerd[1879]: time="2024-06-25T18:37:57.415673230Z" level=info msg="StopPodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" Jun 25 18:37:57.416011 containerd[1879]: time="2024-06-25T18:37:57.415974978Z" level=info msg="Ensure that sandbox 
1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849 in task-service has been cleanup successfully" Jun 25 18:37:57.420867 containerd[1879]: time="2024-06-25T18:37:57.418934714Z" level=info msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" Jun 25 18:37:57.421959 containerd[1879]: time="2024-06-25T18:37:57.421902579Z" level=info msg="Ensure that sandbox a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923 in task-service has been cleanup successfully" Jun 25 18:37:57.423235 containerd[1879]: time="2024-06-25T18:37:57.422128810Z" level=info msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" Jun 25 18:37:57.423235 containerd[1879]: time="2024-06-25T18:37:57.423050944Z" level=info msg="Ensure that sandbox bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c in task-service has been cleanup successfully" Jun 25 18:37:57.511594 containerd[1879]: time="2024-06-25T18:37:57.511534042Z" level=error msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" failed" error="failed to destroy network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.511974 kubelet[3289]: E0625 18:37:57.511932 3289 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:37:57.513467 kubelet[3289]: 
E0625 18:37:57.512520 3289 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346"} Jun 25 18:37:57.513467 kubelet[3289]: E0625 18:37:57.512612 3289 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"239a9240-02bc-486a-9fcd-8b5b78a2cc4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:57.513467 kubelet[3289]: E0625 18:37:57.512650 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"239a9240-02bc-486a-9fcd-8b5b78a2cc4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" podUID="239a9240-02bc-486a-9fcd-8b5b78a2cc4e" Jun 25 18:37:57.566196 containerd[1879]: time="2024-06-25T18:37:57.566021476Z" level=error msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" failed" error="failed to destroy network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.566714 kubelet[3289]: E0625 18:37:57.566502 3289 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:37:57.566869 kubelet[3289]: E0625 18:37:57.566737 3289 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c"} Jun 25 18:37:57.566869 kubelet[3289]: E0625 18:37:57.566785 3289 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dc1be98-a119-4811-b95d-a24913f2cc14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:57.566869 kubelet[3289]: E0625 18:37:57.566832 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dc1be98-a119-4811-b95d-a24913f2cc14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bblmb" podUID="0dc1be98-a119-4811-b95d-a24913f2cc14" Jun 25 18:37:57.574732 containerd[1879]: time="2024-06-25T18:37:57.574676858Z" level=error msg="StopPodSandbox for 
\"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" failed" error="failed to destroy network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.575684 kubelet[3289]: E0625 18:37:57.575167 3289 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:37:57.575684 kubelet[3289]: E0625 18:37:57.575235 3289 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849"} Jun 25 18:37:57.575684 kubelet[3289]: E0625 18:37:57.575565 3289 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b58966d2-ccc7-40b6-ba32-ef9977463f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:57.575684 kubelet[3289]: E0625 18:37:57.575615 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b58966d2-ccc7-40b6-ba32-ef9977463f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8zcvk" podUID="b58966d2-ccc7-40b6-ba32-ef9977463f92" Jun 25 18:37:57.579218 containerd[1879]: time="2024-06-25T18:37:57.578643998Z" level=error msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" failed" error="failed to destroy network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:37:57.579373 kubelet[3289]: E0625 18:37:57.579023 3289 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:37:57.579373 kubelet[3289]: E0625 18:37:57.579072 3289 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923"} Jun 25 18:37:57.579373 kubelet[3289]: E0625 18:37:57.579113 3289 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4109c626-0c92-402f-a0e5-85bdc1e223de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:37:57.579373 kubelet[3289]: E0625 18:37:57.579144 3289 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4109c626-0c92-402f-a0e5-85bdc1e223de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd8sm" podUID="4109c626-0c92-402f-a0e5-85bdc1e223de" Jun 25 18:38:05.675264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916681684.mount: Deactivated successfully. Jun 25 18:38:05.743409 containerd[1879]: time="2024-06-25T18:38:05.735093610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:05.745318 containerd[1879]: time="2024-06-25T18:38:05.736292247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:38:05.748241 containerd[1879]: time="2024-06-25T18:38:05.746720679Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:05.750033 containerd[1879]: time="2024-06-25T18:38:05.749968421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:05.751150 containerd[1879]: time="2024-06-25T18:38:05.751100026Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.395712703s" Jun 25 18:38:05.751910 containerd[1879]: time="2024-06-25T18:38:05.751281549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:38:05.778442 containerd[1879]: time="2024-06-25T18:38:05.778395388Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:38:05.806194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361141569.mount: Deactivated successfully. Jun 25 18:38:05.837351 containerd[1879]: time="2024-06-25T18:38:05.813936771Z" level=info msg="CreateContainer within sandbox \"e3ff9be9d7d6449b3eda80c43a117060420e5e04bb1776e1f0637a9f5d4964ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"663547915d4d13fd80cfcb2a37482a851a3f35051dd6f18bf0ec6d7caf50c902\"" Jun 25 18:38:05.839431 containerd[1879]: time="2024-06-25T18:38:05.838973632Z" level=info msg="StartContainer for \"663547915d4d13fd80cfcb2a37482a851a3f35051dd6f18bf0ec6d7caf50c902\"" Jun 25 18:38:05.947900 systemd[1]: Started cri-containerd-663547915d4d13fd80cfcb2a37482a851a3f35051dd6f18bf0ec6d7caf50c902.scope - libcontainer container 663547915d4d13fd80cfcb2a37482a851a3f35051dd6f18bf0ec6d7caf50c902. Jun 25 18:38:06.077880 containerd[1879]: time="2024-06-25T18:38:06.076624745Z" level=info msg="StartContainer for \"663547915d4d13fd80cfcb2a37482a851a3f35051dd6f18bf0ec6d7caf50c902\" returns successfully" Jun 25 18:38:06.257178 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jun 25 18:38:06.258808 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Jun 25 18:38:06.541712 kubelet[3289]: I0625 18:38:06.527067 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fl492" podStartSLOduration=2.333252625 podStartE2EDuration="24.50408486s" podCreationTimestamp="2024-06-25 18:37:42 +0000 UTC" firstStartedPulling="2024-06-25 18:37:43.581392518 +0000 UTC m=+23.993478406" lastFinishedPulling="2024-06-25 18:38:05.752224754 +0000 UTC m=+46.164310641" observedRunningTime="2024-06-25 18:38:06.503348615 +0000 UTC m=+46.915434545" watchObservedRunningTime="2024-06-25 18:38:06.50408486 +0000 UTC m=+46.916170755" Jun 25 18:38:08.857136 (udev-worker)[4221]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:38:08.867264 systemd-networkd[1718]: vxlan.calico: Link UP Jun 25 18:38:08.867271 systemd-networkd[1718]: vxlan.calico: Gained carrier Jun 25 18:38:08.897004 (udev-worker)[4399]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:38:08.901633 (udev-worker)[4401]: Network interface NamePolicy= disabled on kernel command line.
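The pod_startup_latency_tracker entry above reports both podStartSLOduration=2.333252625 and podStartE2EDuration="24.50408486s", and the two figures are consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that E2E time with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A minimal sketch reproducing both numbers from the timestamps in that entry (truncated to microseconds, since Python's datetime does not carry nanoseconds):

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Parse a log timestamp, truncated to microsecond precision.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

# Values copied from the pod_startup_latency_tracker entry above.
created        = ts("2024-06-25 18:37:42.000000")  # podCreationTimestamp
first_pull     = ts("2024-06-25 18:37:43.581392")  # firstStartedPulling
last_pull      = ts("2024-06-25 18:38:05.752224")  # lastFinishedPulling
watch_observed = ts("2024-06-25 18:38:06.504084")  # watchObservedRunningTime

# E2E startup duration: pod creation until the pod is observed running.
e2e = (watch_observed - created).total_seconds()

# SLO duration: E2E time minus the time spent pulling images.
slo = e2e - (last_pull - first_pull).total_seconds()
```

At microsecond precision this yields roughly 24.504084 s and 2.333252 s, matching the kubelet's reported values up to rounding.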
Jun 25 18:38:10.570131 systemd-networkd[1718]: vxlan.calico: Gained IPv6LL Jun 25 18:38:10.960772 containerd[1879]: time="2024-06-25T18:38:10.959403012Z" level=info msg="StopPodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" Jun 25 18:38:10.960772 containerd[1879]: time="2024-06-25T18:38:10.960544115Z" level=info msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" Jun 25 18:38:10.966777 containerd[1879]: time="2024-06-25T18:38:10.966354339Z" level=info msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" Jun 25 18:38:11.287818 systemd[1]: Started sshd@7-172.31.20.217:22-139.178.68.195:36388.service - OpenSSH per-connection server daemon (139.178.68.195:36388). Jun 25 18:38:11.608041 sshd[4515]: Accepted publickey for core from 139.178.68.195 port 36388 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:11.612338 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.126 [INFO][4480] k8s.go 608: Cleaning up netns ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.126 [INFO][4480] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" iface="eth0" netns="/var/run/netns/cni-0ae083ed-76e9-c86c-3a22-07368ca5f837" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.127 [INFO][4480] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" iface="eth0" netns="/var/run/netns/cni-0ae083ed-76e9-c86c-3a22-07368ca5f837" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.128 [INFO][4480] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" iface="eth0" netns="/var/run/netns/cni-0ae083ed-76e9-c86c-3a22-07368ca5f837" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.128 [INFO][4480] k8s.go 615: Releasing IP address(es) ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.128 [INFO][4480] utils.go 188: Calico CNI releasing IP address ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.576 [INFO][4504] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.579 [INFO][4504] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.579 [INFO][4504] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.611 [WARNING][4504] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.613 [INFO][4504] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.619 [INFO][4504] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:11.630086 containerd[1879]: 2024-06-25 18:38:11.626 [INFO][4480] k8s.go 621: Teardown processing complete. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:11.633867 containerd[1879]: time="2024-06-25T18:38:11.632307278Z" level=info msg="TearDown network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" successfully" Jun 25 18:38:11.633867 containerd[1879]: time="2024-06-25T18:38:11.632371637Z" level=info msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" returns successfully" Jun 25 18:38:11.635962 systemd-logind[1858]: New session 8 of user core. Jun 25 18:38:11.647431 systemd[1]: run-netns-cni\x2d0ae083ed\x2d76e9\x2dc86c\x2d3a22\x2d07368ca5f837.mount: Deactivated successfully. Jun 25 18:38:11.660433 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] k8s.go 608: Cleaning up netns ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" iface="eth0" netns="/var/run/netns/cni-414adb74-53cb-b6d1-6a16-97fdf8f6e21a" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" iface="eth0" netns="/var/run/netns/cni-414adb74-53cb-b6d1-6a16-97fdf8f6e21a" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" iface="eth0" netns="/var/run/netns/cni-414adb74-53cb-b6d1-6a16-97fdf8f6e21a" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] k8s.go 615: Releasing IP address(es) ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.120 [INFO][4481] utils.go 188: Calico CNI releasing IP address ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.582 [INFO][4503] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.583 [INFO][4503] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.619 [INFO][4503] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.678 [WARNING][4503] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.678 [INFO][4503] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.686 [INFO][4503] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:11.703699 containerd[1879]: 2024-06-25 18:38:11.697 [INFO][4481] k8s.go 621: Teardown processing complete. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:11.706210 containerd[1879]: time="2024-06-25T18:38:11.705665402Z" level=info msg="TearDown network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" successfully" Jun 25 18:38:11.706210 containerd[1879]: time="2024-06-25T18:38:11.705708007Z" level=info msg="StopPodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" returns successfully" Jun 25 18:38:11.712286 systemd[1]: run-netns-cni\x2d414adb74\x2d53cb\x2db6d1\x2d6a16\x2d97fdf8f6e21a.mount: Deactivated successfully. Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.100 [INFO][4491] k8s.go 608: Cleaning up netns ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.102 [INFO][4491] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" iface="eth0" netns="/var/run/netns/cni-d121d740-423c-94a5-7329-931d5ff71d01" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.102 [INFO][4491] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" iface="eth0" netns="/var/run/netns/cni-d121d740-423c-94a5-7329-931d5ff71d01" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.103 [INFO][4491] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" iface="eth0" netns="/var/run/netns/cni-d121d740-423c-94a5-7329-931d5ff71d01" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.103 [INFO][4491] k8s.go 615: Releasing IP address(es) ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.103 [INFO][4491] utils.go 188: Calico CNI releasing IP address ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.601 [INFO][4502] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.606 [INFO][4502] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.687 [INFO][4502] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.706 [WARNING][4502] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.706 [INFO][4502] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.709 [INFO][4502] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:11.725934 containerd[1879]: 2024-06-25 18:38:11.715 [INFO][4491] k8s.go 621: Teardown processing complete. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:11.725934 containerd[1879]: time="2024-06-25T18:38:11.722301294Z" level=info msg="TearDown network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" successfully" Jun 25 18:38:11.725934 containerd[1879]: time="2024-06-25T18:38:11.722336534Z" level=info msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" returns successfully" Jun 25 18:38:11.729989 containerd[1879]: time="2024-06-25T18:38:11.729283148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bblmb,Uid:0dc1be98-a119-4811-b95d-a24913f2cc14,Namespace:kube-system,Attempt:1,}" Jun 25 18:38:11.730291 containerd[1879]: time="2024-06-25T18:38:11.730146909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8zcvk,Uid:b58966d2-ccc7-40b6-ba32-ef9977463f92,Namespace:kube-system,Attempt:1,}" Jun 25 18:38:11.730579 systemd[1]: run-netns-cni\x2dd121d740\x2d423c\x2d94a5\x2d7329\x2d931d5ff71d01.mount: Deactivated 
successfully. Jun 25 18:38:11.755385 containerd[1879]: time="2024-06-25T18:38:11.753480928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c77496f95-mnncn,Uid:239a9240-02bc-486a-9fcd-8b5b78a2cc4e,Namespace:calico-system,Attempt:1,}" Jun 25 18:38:12.022791 containerd[1879]: time="2024-06-25T18:38:12.022747166Z" level=info msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" Jun 25 18:38:12.781207 sshd[4515]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:12.793713 systemd[1]: sshd@7-172.31.20.217:22-139.178.68.195:36388.service: Deactivated successfully. Jun 25 18:38:12.801764 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:38:12.812438 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:38:12.815698 systemd-logind[1858]: Removed session 8. Jun 25 18:38:13.186735 systemd-networkd[1718]: cali5c50d8e61c9: Link UP Jun 25 18:38:13.192470 systemd-networkd[1718]: cali5c50d8e61c9: Gained carrier Jun 25 18:38:13.197362 (udev-worker)[4624]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.820 [INFO][4573] k8s.go 608: Cleaning up netns ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.820 [INFO][4573] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" iface="eth0" netns="/var/run/netns/cni-d5548b87-0dca-eeeb-6edd-b21aff15c575" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.821 [INFO][4573] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" iface="eth0" netns="/var/run/netns/cni-d5548b87-0dca-eeeb-6edd-b21aff15c575" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.829 [INFO][4573] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" iface="eth0" netns="/var/run/netns/cni-d5548b87-0dca-eeeb-6edd-b21aff15c575" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.830 [INFO][4573] k8s.go 615: Releasing IP address(es) ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.830 [INFO][4573] utils.go 188: Calico CNI releasing IP address ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.997 [INFO][4608] ipam_plugin.go 411: Releasing address using handleID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:12.997 [INFO][4608] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:13.159 [INFO][4608] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:13.225 [WARNING][4608] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:13.226 [INFO][4608] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:13.233 [INFO][4608] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:13.247338 containerd[1879]: 2024-06-25 18:38:13.243 [INFO][4573] k8s.go 621: Teardown processing complete. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:13.256578 containerd[1879]: time="2024-06-25T18:38:13.247532784Z" level=info msg="TearDown network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" successfully" Jun 25 18:38:13.256578 containerd[1879]: time="2024-06-25T18:38:13.247566885Z" level=info msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" returns successfully" Jun 25 18:38:13.256578 containerd[1879]: time="2024-06-25T18:38:13.251946101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd8sm,Uid:4109c626-0c92-402f-a0e5-85bdc1e223de,Namespace:calico-system,Attempt:1,}" Jun 25 18:38:13.264618 systemd[1]: run-netns-cni\x2dd5548b87\x2d0dca\x2deeeb\x2d6edd\x2db21aff15c575.mount: Deactivated successfully. 
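Each teardown above follows the same Calico CNI sequence: k8s.go cleans up the netns, dataplane_linux.go finds the veth already gone, the IPAM plugin releases the (already absent) address, and systemd then deactivates the run-netns mount. When correlating these entries by hand, the ContainerID/netns pairs can be pulled out with a regular expression; a hedged sketch, using abbreviated copies of two entries from the journal above and assuming the quoted key="value" field layout stays stable:

```python
import re

# Abbreviated copies of two Calico CNI teardown entries from the journal above.
LOG = (
    '[INFO][4480] k8s.go 608: Cleaning up netns ContainerID='
    '"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c"\n'
    '[INFO][4480] dataplane_linux.go 530: Deleting workload\'s device in netns. '
    'ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" '
    'iface="eth0" netns="/var/run/netns/cni-0ae083ed-76e9-c86c-3a22-07368ca5f837"\n'
)

# A 64-hex-digit sandbox ID, optionally followed by an iface/netns pair.
ENTRY = re.compile(
    r'ContainerID="(?P<cid>[0-9a-f]{64})"'
    r'(?: iface="(?P<iface>[^"]+)" netns="(?P<netns>[^"]+)")?'
)

# Keep only the matches that also carry a netns path.
pairs = {(m["cid"], m["netns"]) for m in ENTRY.finditer(LOG) if m["netns"]}
```

Here `pairs` ends up with a single entry mapping the bf7970fa… sandbox to /var/run/netns/cni-0ae083ed-76e9-c86c-3a22-07368ca5f837, the same netns that systemd later unmounts.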
Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.652 [INFO][4552] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0 calico-kube-controllers-6c77496f95- calico-system 239a9240-02bc-486a-9fcd-8b5b78a2cc4e 725 0 2024-06-25 18:37:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c77496f95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-217 calico-kube-controllers-6c77496f95-mnncn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c50d8e61c9 [] []}} ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.653 [INFO][4552] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.891 [INFO][4593] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" HandleID="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.987 [INFO][4593] ipam_plugin.go 264: Auto assigning IP 
ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" HandleID="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003181c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-217", "pod":"calico-kube-controllers-6c77496f95-mnncn", "timestamp":"2024-06-25 18:38:12.891712601 +0000 UTC"}, Hostname:"ip-172-31-20-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.987 [INFO][4593] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.987 [INFO][4593] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
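The IPAM walk above (look up existing affinities, try affinity for 192.168.36.64/26, load and confirm the block, then assign one address) ends with 192.168.36.65 claimed from the host's affinity block; the endpoint is later populated with that address as a /32. The arithmetic behind those entries can be sanity-checked with a small sketch using Python's ipaddress module (that a /26 block holds 64 addresses is standard CIDR math, not something the log states):

```python
import ipaddress

# Values from the ipam.go entries above.
block    = ipaddress.ip_network("192.168.36.64/26")   # host affinity block
assigned = ipaddress.ip_address("192.168.36.65")      # address claimed for the pod
pod_net  = ipaddress.ip_network("192.168.36.65/32")   # endpoint's IPNetworks entry

assert assigned in block           # the claimed IP falls inside the affinity block
assert block.num_addresses == 64   # a /26 carves out 64 addresses
assert pod_net.subnet_of(block)    # the /32 endpoint lies within the block
```

The first usable assignment being .65 rather than .64 is consistent with the block's network address itself (192.168.36.64) not being handed to a workload.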
Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:12.987 [INFO][4593] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-217' Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.032 [INFO][4593] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.077 [INFO][4593] ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.105 [INFO][4593] ipam.go 489: Trying affinity for 192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.109 [INFO][4593] ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.114 [INFO][4593] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.114 [INFO][4593] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.118 [INFO][4593] ipam.go 1685: Creating new handle: k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5 Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.140 [INFO][4593] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.159 [INFO][4593] ipam.go 1216: Successfully claimed IPs: [192.168.36.65/26] block=192.168.36.64/26 
handle="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.159 [INFO][4593] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.65/26] handle="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" host="ip-172-31-20-217" Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.159 [INFO][4593] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:13.359451 containerd[1879]: 2024-06-25 18:38:13.159 [INFO][4593] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.65/26] IPv6=[] ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" HandleID="k8s-pod-network.e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.360581 containerd[1879]: 2024-06-25 18:38:13.173 [INFO][4552] k8s.go 386: Populated endpoint ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0", GenerateName:"calico-kube-controllers-6c77496f95-", Namespace:"calico-system", SelfLink:"", UID:"239a9240-02bc-486a-9fcd-8b5b78a2cc4e", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c77496f95", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"", Pod:"calico-kube-controllers-6c77496f95-mnncn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c50d8e61c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.360581 containerd[1879]: 2024-06-25 18:38:13.173 [INFO][4552] k8s.go 387: Calico CNI using IPs: [192.168.36.65/32] ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.360581 containerd[1879]: 2024-06-25 18:38:13.173 [INFO][4552] dataplane_linux.go 68: Setting the host side veth name to cali5c50d8e61c9 ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.360581 containerd[1879]: 2024-06-25 18:38:13.194 [INFO][4552] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.360581 
containerd[1879]: 2024-06-25 18:38:13.199 [INFO][4552] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0", GenerateName:"calico-kube-controllers-6c77496f95-", Namespace:"calico-system", SelfLink:"", UID:"239a9240-02bc-486a-9fcd-8b5b78a2cc4e", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c77496f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5", Pod:"calico-kube-controllers-6c77496f95-mnncn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c50d8e61c9", MAC:"de:d9:d9:78:7b:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.360581 containerd[1879]: 
2024-06-25 18:38:13.333 [INFO][4552] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5" Namespace="calico-system" Pod="calico-kube-controllers-6c77496f95-mnncn" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:13.521990 containerd[1879]: time="2024-06-25T18:38:13.520008975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:13.521990 containerd[1879]: time="2024-06-25T18:38:13.520868123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.521990 containerd[1879]: time="2024-06-25T18:38:13.520897629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:13.521990 containerd[1879]: time="2024-06-25T18:38:13.520914835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.582896 systemd-networkd[1718]: califc5c9ed0446: Link UP Jun 25 18:38:13.589932 systemd-networkd[1718]: califc5c9ed0446: Gained carrier Jun 25 18:38:13.635529 systemd[1]: Started cri-containerd-e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5.scope - libcontainer container e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5. Jun 25 18:38:13.677748 systemd[1]: run-containerd-runc-k8s.io-e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5-runc.Ji93Cu.mount: Deactivated successfully. 
Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:12.683 [INFO][4579] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0 coredns-7db6d8ff4d- kube-system b58966d2-ccc7-40b6-ba32-ef9977463f92 726 0 2024-06-25 18:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-217 coredns-7db6d8ff4d-8zcvk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc5c9ed0446 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:12.684 [INFO][4579] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:12.964 [INFO][4594] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" HandleID="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.057 [INFO][4594] ipam_plugin.go 264: Auto assigning IP ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" HandleID="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-217", "pod":"coredns-7db6d8ff4d-8zcvk", "timestamp":"2024-06-25 18:38:12.964037486 +0000 UTC"}, Hostname:"ip-172-31-20-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.059 [INFO][4594] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.240 [INFO][4594] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.240 [INFO][4594] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-217' Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.308 [INFO][4594] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.368 [INFO][4594] ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.426 [INFO][4594] ipam.go 489: Trying affinity for 192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.465 [INFO][4594] ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.473 [INFO][4594] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.473 [INFO][4594] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 
handle="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.476 [INFO][4594] ipam.go 1685: Creating new handle: k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7 Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.496 [INFO][4594] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.516 [INFO][4594] ipam.go 1216: Successfully claimed IPs: [192.168.36.66/26] block=192.168.36.64/26 handle="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.516 [INFO][4594] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.66/26] handle="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" host="ip-172-31-20-217" Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.517 [INFO][4594] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:38:13.690681 containerd[1879]: 2024-06-25 18:38:13.520 [INFO][4594] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.66/26] IPv6=[] ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" HandleID="k8s-pod-network.de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.545 [INFO][4579] k8s.go 386: Populated endpoint ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b58966d2-ccc7-40b6-ba32-ef9977463f92", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"", Pod:"coredns-7db6d8ff4d-8zcvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc5c9ed0446", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.548 [INFO][4579] k8s.go 387: Calico CNI using IPs: [192.168.36.66/32] ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.550 [INFO][4579] dataplane_linux.go 68: Setting the host side veth name to califc5c9ed0446 ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.585 [INFO][4579] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.608 [INFO][4579] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b58966d2-ccc7-40b6-ba32-ef9977463f92", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7", Pod:"coredns-7db6d8ff4d-8zcvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc5c9ed0446", MAC:"e2:0e:82:a3:f5:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.693080 containerd[1879]: 2024-06-25 18:38:13.684 [INFO][4579] k8s.go 500: Wrote updated endpoint to datastore ContainerID="de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-8zcvk" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:13.728841 systemd-networkd[1718]: calib07cd040fd4: Link UP Jun 25 18:38:13.731002 systemd-networkd[1718]: calib07cd040fd4: Gained carrier Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:12.672 [INFO][4532] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0 coredns-7db6d8ff4d- kube-system 0dc1be98-a119-4811-b95d-a24913f2cc14 727 0 2024-06-25 18:37:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-217 coredns-7db6d8ff4d-bblmb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib07cd040fd4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:12.676 [INFO][4532] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:12.973 [INFO][4595] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" HandleID="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.078 [INFO][4595] ipam_plugin.go 264: Auto assigning 
IP ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" HandleID="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5110), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-217", "pod":"coredns-7db6d8ff4d-bblmb", "timestamp":"2024-06-25 18:38:12.971220196 +0000 UTC"}, Hostname:"ip-172-31-20-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.078 [INFO][4595] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.517 [INFO][4595] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.517 [INFO][4595] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-217' Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.538 [INFO][4595] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.549 [INFO][4595] ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.627 [INFO][4595] ipam.go 489: Trying affinity for 192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.644 [INFO][4595] ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.658 [INFO][4595] ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.658 [INFO][4595] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.680 [INFO][4595] ipam.go 1685: Creating new handle: k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.695 [INFO][4595] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.706 [INFO][4595] ipam.go 1216: Successfully claimed IPs: [192.168.36.67/26] block=192.168.36.64/26 handle="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.706 [INFO][4595] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.67/26] handle="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" host="ip-172-31-20-217" Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.706 [INFO][4595] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:38:13.772976 containerd[1879]: 2024-06-25 18:38:13.706 [INFO][4595] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.67/26] IPv6=[] ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" HandleID="k8s-pod-network.04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.713 [INFO][4532] k8s.go 386: Populated endpoint ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0dc1be98-a119-4811-b95d-a24913f2cc14", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"", Pod:"coredns-7db6d8ff4d-bblmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib07cd040fd4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.713 [INFO][4532] k8s.go 387: Calico CNI using IPs: [192.168.36.67/32] ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.713 [INFO][4532] dataplane_linux.go 68: Setting the host side veth name to calib07cd040fd4 ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.730 [INFO][4532] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.733 [INFO][4532] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0dc1be98-a119-4811-b95d-a24913f2cc14", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c", Pod:"coredns-7db6d8ff4d-bblmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib07cd040fd4", MAC:"2e:c2:06:ca:cb:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.774862 containerd[1879]: 2024-06-25 18:38:13.759 [INFO][4532] k8s.go 500: Wrote updated endpoint to datastore ContainerID="04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-bblmb" WorkloadEndpoint="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:13.825539 containerd[1879]: time="2024-06-25T18:38:13.825056199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:13.825539 containerd[1879]: time="2024-06-25T18:38:13.825138022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.825539 containerd[1879]: time="2024-06-25T18:38:13.825180970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:13.825539 containerd[1879]: time="2024-06-25T18:38:13.825203364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.855838 systemd-networkd[1718]: cali02677af2580: Link UP Jun 25 18:38:13.856123 systemd-networkd[1718]: cali02677af2580: Gained carrier Jun 25 18:38:13.892973 containerd[1879]: time="2024-06-25T18:38:13.892028198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:13.892973 containerd[1879]: time="2024-06-25T18:38:13.892119358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.892973 containerd[1879]: time="2024-06-25T18:38:13.892149489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:13.892973 containerd[1879]: time="2024-06-25T18:38:13.892170272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.493 [INFO][4628] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0 csi-node-driver- calico-system 4109c626-0c92-402f-a0e5-85bdc1e223de 740 0 2024-06-25 18:37:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-20-217 csi-node-driver-jd8sm eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali02677af2580 [] []}} ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.496 [INFO][4628] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.662 [INFO][4674] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" HandleID="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.688 [INFO][4674] ipam_plugin.go 264: Auto assigning IP ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" 
HandleID="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-217", "pod":"csi-node-driver-jd8sm", "timestamp":"2024-06-25 18:38:13.662735503 +0000 UTC"}, Hostname:"ip-172-31-20-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.689 [INFO][4674] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.706 [INFO][4674] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.707 [INFO][4674] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-217' Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.714 [INFO][4674] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.741 [INFO][4674] ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.757 [INFO][4674] ipam.go 489: Trying affinity for 192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.776 [INFO][4674] ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.794 [INFO][4674] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 
2024-06-25 18:38:13.795 [INFO][4674] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.803 [INFO][4674] ipam.go 1685: Creating new handle: k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472 Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.815 [INFO][4674] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.833 [INFO][4674] ipam.go 1216: Successfully claimed IPs: [192.168.36.68/26] block=192.168.36.64/26 handle="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.833 [INFO][4674] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.68/26] handle="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" host="ip-172-31-20-217" Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.833 [INFO][4674] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:38:13.930081 containerd[1879]: 2024-06-25 18:38:13.833 [INFO][4674] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.68/26] IPv6=[] ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" HandleID="k8s-pod-network.9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.840 [INFO][4628] k8s.go 386: Populated endpoint ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4109c626-0c92-402f-a0e5-85bdc1e223de", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"", Pod:"csi-node-driver-jd8sm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02677af2580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.843 [INFO][4628] k8s.go 387: Calico CNI using IPs: [192.168.36.68/32] ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.843 [INFO][4628] dataplane_linux.go 68: Setting the host side veth name to cali02677af2580 ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.857 [INFO][4628] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.867 [INFO][4628] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4109c626-0c92-402f-a0e5-85bdc1e223de", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472", Pod:"csi-node-driver-jd8sm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02677af2580", MAC:"e2:40:8f:01:dd:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:13.933785 containerd[1879]: 2024-06-25 18:38:13.895 [INFO][4628] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472" Namespace="calico-system" Pod="csi-node-driver-jd8sm" WorkloadEndpoint="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:13.937054 systemd[1]: Started cri-containerd-de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7.scope - libcontainer container de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7. Jun 25 18:38:14.027069 systemd[1]: Started cri-containerd-04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c.scope - libcontainer container 04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c. 
Jun 25 18:38:14.086707 containerd[1879]: time="2024-06-25T18:38:14.086120874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:14.088531 containerd[1879]: time="2024-06-25T18:38:14.087999619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:14.088531 containerd[1879]: time="2024-06-25T18:38:14.088068561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:14.088531 containerd[1879]: time="2024-06-25T18:38:14.088089263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:14.111675 containerd[1879]: time="2024-06-25T18:38:14.111630607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c77496f95-mnncn,Uid:239a9240-02bc-486a-9fcd-8b5b78a2cc4e,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5\"" Jun 25 18:38:14.119683 containerd[1879]: time="2024-06-25T18:38:14.118920792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8zcvk,Uid:b58966d2-ccc7-40b6-ba32-ef9977463f92,Namespace:kube-system,Attempt:1,} returns sandbox id \"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7\"" Jun 25 18:38:14.120519 containerd[1879]: time="2024-06-25T18:38:14.120446839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:38:14.143425 containerd[1879]: time="2024-06-25T18:38:14.143061145Z" level=info msg="CreateContainer within sandbox \"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:38:14.160230 systemd[1]: Started 
cri-containerd-9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472.scope - libcontainer container 9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472. Jun 25 18:38:14.224160 containerd[1879]: time="2024-06-25T18:38:14.224113276Z" level=info msg="CreateContainer within sandbox \"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3ec18587b22a8a8b849f8f9c3ce71cec39cf45acf824433b0cfc8df71bc8a65\"" Jun 25 18:38:14.228013 containerd[1879]: time="2024-06-25T18:38:14.227814648Z" level=info msg="StartContainer for \"d3ec18587b22a8a8b849f8f9c3ce71cec39cf45acf824433b0cfc8df71bc8a65\"" Jun 25 18:38:14.378199 containerd[1879]: time="2024-06-25T18:38:14.378064357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bblmb,Uid:0dc1be98-a119-4811-b95d-a24913f2cc14,Namespace:kube-system,Attempt:1,} returns sandbox id \"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c\"" Jun 25 18:38:14.384114 containerd[1879]: time="2024-06-25T18:38:14.382153321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd8sm,Uid:4109c626-0c92-402f-a0e5-85bdc1e223de,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472\"" Jun 25 18:38:14.388393 containerd[1879]: time="2024-06-25T18:38:14.388232459Z" level=info msg="CreateContainer within sandbox \"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:38:14.409732 systemd[1]: Started cri-containerd-d3ec18587b22a8a8b849f8f9c3ce71cec39cf45acf824433b0cfc8df71bc8a65.scope - libcontainer container d3ec18587b22a8a8b849f8f9c3ce71cec39cf45acf824433b0cfc8df71bc8a65. 
Jun 25 18:38:14.421452 containerd[1879]: time="2024-06-25T18:38:14.420080542Z" level=info msg="CreateContainer within sandbox \"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5cd6ae8b044f9e315a45d6522ba952942e23669c052e833569e788a3df0976d1\"" Jun 25 18:38:14.422881 containerd[1879]: time="2024-06-25T18:38:14.422842256Z" level=info msg="StartContainer for \"5cd6ae8b044f9e315a45d6522ba952942e23669c052e833569e788a3df0976d1\"" Jun 25 18:38:14.475685 systemd-networkd[1718]: cali5c50d8e61c9: Gained IPv6LL Jun 25 18:38:14.558438 systemd[1]: Started cri-containerd-5cd6ae8b044f9e315a45d6522ba952942e23669c052e833569e788a3df0976d1.scope - libcontainer container 5cd6ae8b044f9e315a45d6522ba952942e23669c052e833569e788a3df0976d1. Jun 25 18:38:14.582690 containerd[1879]: time="2024-06-25T18:38:14.582113867Z" level=info msg="StartContainer for \"d3ec18587b22a8a8b849f8f9c3ce71cec39cf45acf824433b0cfc8df71bc8a65\" returns successfully" Jun 25 18:38:14.641852 containerd[1879]: time="2024-06-25T18:38:14.641777369Z" level=info msg="StartContainer for \"5cd6ae8b044f9e315a45d6522ba952942e23669c052e833569e788a3df0976d1\" returns successfully" Jun 25 18:38:15.046742 systemd-networkd[1718]: calib07cd040fd4: Gained IPv6LL Jun 25 18:38:15.365162 systemd-networkd[1718]: cali02677af2580: Gained IPv6LL Jun 25 18:38:15.558046 systemd-networkd[1718]: califc5c9ed0446: Gained IPv6LL Jun 25 18:38:15.680485 kubelet[3289]: I0625 18:38:15.680406 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8zcvk" podStartSLOduration=42.680272693 podStartE2EDuration="42.680272693s" podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:38:15.676525451 +0000 UTC m=+56.088611346" watchObservedRunningTime="2024-06-25 18:38:15.680272693 
+0000 UTC m=+56.092358586" Jun 25 18:38:15.739330 kubelet[3289]: I0625 18:38:15.738102 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bblmb" podStartSLOduration=42.738077837 podStartE2EDuration="42.738077837s" podCreationTimestamp="2024-06-25 18:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:38:15.736470568 +0000 UTC m=+56.148556461" watchObservedRunningTime="2024-06-25 18:38:15.738077837 +0000 UTC m=+56.150163730" Jun 25 18:38:17.840971 systemd[1]: Started sshd@8-172.31.20.217:22-139.178.68.195:36398.service - OpenSSH per-connection server daemon (139.178.68.195:36398). Jun 25 18:38:17.886966 containerd[1879]: time="2024-06-25T18:38:17.886170461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:17.890197 containerd[1879]: time="2024-06-25T18:38:17.889361220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:38:17.898128 containerd[1879]: time="2024-06-25T18:38:17.894131931Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:17.901077 containerd[1879]: time="2024-06-25T18:38:17.901032924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:17.904170 containerd[1879]: time="2024-06-25T18:38:17.904017257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.783524423s" Jun 25 18:38:17.904170 containerd[1879]: time="2024-06-25T18:38:17.904071529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:38:17.914555 containerd[1879]: time="2024-06-25T18:38:17.909978688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:38:18.012530 containerd[1879]: time="2024-06-25T18:38:18.012335095Z" level=info msg="CreateContainer within sandbox \"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:38:18.046850 containerd[1879]: time="2024-06-25T18:38:18.042969435Z" level=info msg="CreateContainer within sandbox \"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7\"" Jun 25 18:38:18.053105 containerd[1879]: time="2024-06-25T18:38:18.048626886Z" level=info msg="StartContainer for \"1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7\"" Jun 25 18:38:18.176328 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 36398 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:18.187900 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:18.283579 systemd-logind[1858]: New session 9 of user core. Jun 25 18:38:18.297068 systemd[1]: Started cri-containerd-1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7.scope - libcontainer container 1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7. 
Jun 25 18:38:18.298534 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:38:18.350955 ntpd[1849]: Listen normally on 6 vxlan.calico 192.168.36.64:123 Jun 25 18:38:18.351052 ntpd[1849]: Listen normally on 7 vxlan.calico [fe80::6437:26ff:febc:a845%4]:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 6 vxlan.calico 192.168.36.64:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 7 vxlan.calico [fe80::6437:26ff:febc:a845%4]:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 8 cali5c50d8e61c9 [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 9 califc5c9ed0446 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 10 calib07cd040fd4 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:38:18.351603 ntpd[1849]: 25 Jun 18:38:18 ntpd[1849]: Listen normally on 11 cali02677af2580 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:38:18.351265 ntpd[1849]: Listen normally on 8 cali5c50d8e61c9 [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:38:18.351310 ntpd[1849]: Listen normally on 9 califc5c9ed0446 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 25 18:38:18.351350 ntpd[1849]: Listen normally on 10 calib07cd040fd4 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:38:18.351416 ntpd[1849]: Listen normally on 11 cali02677af2580 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:38:18.568329 containerd[1879]: time="2024-06-25T18:38:18.567845718Z" level=info msg="StartContainer for \"1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7\" returns successfully" Jun 25 18:38:18.725959 kubelet[3289]: I0625 18:38:18.725527 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c77496f95-mnncn" podStartSLOduration=31.935104073 podStartE2EDuration="35.725504375s" podCreationTimestamp="2024-06-25 18:37:43 
+0000 UTC" firstStartedPulling="2024-06-25 18:38:14.117557956 +0000 UTC m=+54.529643841" lastFinishedPulling="2024-06-25 18:38:17.907958251 +0000 UTC m=+58.320044143" observedRunningTime="2024-06-25 18:38:18.724607096 +0000 UTC m=+59.136692993" watchObservedRunningTime="2024-06-25 18:38:18.725504375 +0000 UTC m=+59.137590270" Jun 25 18:38:18.943028 systemd[1]: run-containerd-runc-k8s.io-1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7-runc.HUxvOw.mount: Deactivated successfully. Jun 25 18:38:19.271864 sshd[4952]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:19.279002 systemd[1]: sshd@8-172.31.20.217:22-139.178.68.195:36398.service: Deactivated successfully. Jun 25 18:38:19.283356 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:38:19.284898 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:38:19.289288 systemd-logind[1858]: Removed session 9. Jun 25 18:38:19.670985 containerd[1879]: time="2024-06-25T18:38:19.670883819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:19.674009 containerd[1879]: time="2024-06-25T18:38:19.673561401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:38:19.676809 containerd[1879]: time="2024-06-25T18:38:19.675779615Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:19.682854 containerd[1879]: time="2024-06-25T18:38:19.681098728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:19.682854 containerd[1879]: time="2024-06-25T18:38:19.682229452Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.772200674s" Jun 25 18:38:19.682854 containerd[1879]: time="2024-06-25T18:38:19.682345830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:38:19.691404 containerd[1879]: time="2024-06-25T18:38:19.691323207Z" level=info msg="CreateContainer within sandbox \"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:38:19.831787 containerd[1879]: time="2024-06-25T18:38:19.831330938Z" level=info msg="CreateContainer within sandbox \"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4220e66c9f30786351b44a329c094cb0b783dbd4df533838a0baaf04bca0f039\"" Jun 25 18:38:19.833139 containerd[1879]: time="2024-06-25T18:38:19.833082621Z" level=info msg="StartContainer for \"4220e66c9f30786351b44a329c094cb0b783dbd4df533838a0baaf04bca0f039\"" Jun 25 18:38:20.023102 systemd[1]: Started cri-containerd-4220e66c9f30786351b44a329c094cb0b783dbd4df533838a0baaf04bca0f039.scope - libcontainer container 4220e66c9f30786351b44a329c094cb0b783dbd4df533838a0baaf04bca0f039. 
Jun 25 18:38:20.108876 containerd[1879]: time="2024-06-25T18:38:20.107882510Z" level=info msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" Jun 25 18:38:20.350572 containerd[1879]: time="2024-06-25T18:38:20.348407816Z" level=info msg="StartContainer for \"4220e66c9f30786351b44a329c094cb0b783dbd4df533838a0baaf04bca0f039\" returns successfully" Jun 25 18:38:20.356584 containerd[1879]: time="2024-06-25T18:38:20.356542060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.456 [WARNING][5059] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0", GenerateName:"calico-kube-controllers-6c77496f95-", Namespace:"calico-system", SelfLink:"", UID:"239a9240-02bc-486a-9fcd-8b5b78a2cc4e", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c77496f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5", 
Pod:"calico-kube-controllers-6c77496f95-mnncn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c50d8e61c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.457 [INFO][5059] k8s.go 608: Cleaning up netns ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.457 [INFO][5059] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" iface="eth0" netns="" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.457 [INFO][5059] k8s.go 615: Releasing IP address(es) ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.457 [INFO][5059] utils.go 188: Calico CNI releasing IP address ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.556 [INFO][5091] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.556 [INFO][5091] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.556 [INFO][5091] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.585 [WARNING][5091] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.585 [INFO][5091] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.592 [INFO][5091] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:20.609507 containerd[1879]: 2024-06-25 18:38:20.599 [INFO][5059] k8s.go 621: Teardown processing complete. 
ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.609507 containerd[1879]: time="2024-06-25T18:38:20.609091810Z" level=info msg="TearDown network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" successfully" Jun 25 18:38:20.609507 containerd[1879]: time="2024-06-25T18:38:20.609124631Z" level=info msg="StopPodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" returns successfully" Jun 25 18:38:20.629188 containerd[1879]: time="2024-06-25T18:38:20.626958053Z" level=info msg="RemovePodSandbox for \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" Jun 25 18:38:20.629188 containerd[1879]: time="2024-06-25T18:38:20.627037744Z" level=info msg="Forcibly stopping sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\"" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.787 [WARNING][5113] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0", GenerateName:"calico-kube-controllers-6c77496f95-", Namespace:"calico-system", SelfLink:"", UID:"239a9240-02bc-486a-9fcd-8b5b78a2cc4e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c77496f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"e4bdb58b1abe2c3c974edbdd3d7685c62df874b1c7d46ad7d9a76238f9c86da5", Pod:"calico-kube-controllers-6c77496f95-mnncn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c50d8e61c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.790 [INFO][5113] k8s.go 608: Cleaning up netns ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.791 [INFO][5113] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" iface="eth0" netns="" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.791 [INFO][5113] k8s.go 615: Releasing IP address(es) ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.791 [INFO][5113] utils.go 188: Calico CNI releasing IP address ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.898 [INFO][5122] ipam_plugin.go 411: Releasing address using handleID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.899 [INFO][5122] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.899 [INFO][5122] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.910 [WARNING][5122] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.910 [INFO][5122] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" HandleID="k8s-pod-network.a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Workload="ip--172--31--20--217-k8s-calico--kube--controllers--6c77496f95--mnncn-eth0" Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.913 [INFO][5122] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:20.920064 containerd[1879]: 2024-06-25 18:38:20.916 [INFO][5113] k8s.go 621: Teardown processing complete. ContainerID="a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346" Jun 25 18:38:20.922051 containerd[1879]: time="2024-06-25T18:38:20.921550001Z" level=info msg="TearDown network for sandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" successfully" Jun 25 18:38:20.951579 containerd[1879]: time="2024-06-25T18:38:20.949524164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:38:20.951948 containerd[1879]: time="2024-06-25T18:38:20.951914016Z" level=info msg="RemovePodSandbox \"a615f784b4ea81d1ffa5d305f1dba14198613895437ce503591554b9419f9346\" returns successfully" Jun 25 18:38:20.953240 containerd[1879]: time="2024-06-25T18:38:20.953113847Z" level=info msg="StopPodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.055 [WARNING][5140] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b58966d2-ccc7-40b6-ba32-ef9977463f92", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7", Pod:"coredns-7db6d8ff4d-8zcvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc5c9ed0446", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.056 [INFO][5140] k8s.go 608: Cleaning up netns ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.056 [INFO][5140] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" iface="eth0" netns="" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.056 [INFO][5140] k8s.go 615: Releasing IP address(es) ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.056 [INFO][5140] utils.go 188: Calico CNI releasing IP address ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.112 [INFO][5146] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.113 [INFO][5146] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.113 [INFO][5146] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.129 [WARNING][5146] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.129 [INFO][5146] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.132 [INFO][5146] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:21.140717 containerd[1879]: 2024-06-25 18:38:21.136 [INFO][5140] k8s.go 621: Teardown processing complete. 
ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.149954 containerd[1879]: time="2024-06-25T18:38:21.140767951Z" level=info msg="TearDown network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" successfully" Jun 25 18:38:21.149954 containerd[1879]: time="2024-06-25T18:38:21.140816835Z" level=info msg="StopPodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" returns successfully" Jun 25 18:38:21.149954 containerd[1879]: time="2024-06-25T18:38:21.141876057Z" level=info msg="RemovePodSandbox for \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" Jun 25 18:38:21.149954 containerd[1879]: time="2024-06-25T18:38:21.141911020Z" level=info msg="Forcibly stopping sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\"" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.293 [WARNING][5165] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b58966d2-ccc7-40b6-ba32-ef9977463f92", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"de7a98039a684b3edcd75ef26fddaf47315245106ea71c92223c589b71f90de7", Pod:"coredns-7db6d8ff4d-8zcvk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc5c9ed0446", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.293 [INFO][5165] k8s.go 608: Cleaning up 
netns ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.293 [INFO][5165] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" iface="eth0" netns="" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.293 [INFO][5165] k8s.go 615: Releasing IP address(es) ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.293 [INFO][5165] utils.go 188: Calico CNI releasing IP address ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.340 [INFO][5172] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.340 [INFO][5172] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.340 [INFO][5172] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.354 [WARNING][5172] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.354 [INFO][5172] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" HandleID="k8s-pod-network.1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--8zcvk-eth0" Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.358 [INFO][5172] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:21.363043 containerd[1879]: 2024-06-25 18:38:21.360 [INFO][5165] k8s.go 621: Teardown processing complete. ContainerID="1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849" Jun 25 18:38:21.363043 containerd[1879]: time="2024-06-25T18:38:21.362246115Z" level=info msg="TearDown network for sandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" successfully" Jun 25 18:38:21.369164 containerd[1879]: time="2024-06-25T18:38:21.369116632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:38:21.369713 containerd[1879]: time="2024-06-25T18:38:21.369199793Z" level=info msg="RemovePodSandbox \"1a30d46c8972338c5167ad00daeedfb2787ba575849dd504fdabe9d7e36d5849\" returns successfully" Jun 25 18:38:21.370120 containerd[1879]: time="2024-06-25T18:38:21.370090853Z" level=info msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.446 [WARNING][5190] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0dc1be98-a119-4811-b95d-a24913f2cc14", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c", Pod:"coredns-7db6d8ff4d-bblmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib07cd040fd4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.446 [INFO][5190] k8s.go 608: Cleaning up netns ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.446 [INFO][5190] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" iface="eth0" netns="" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.446 [INFO][5190] k8s.go 615: Releasing IP address(es) ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.446 [INFO][5190] utils.go 188: Calico CNI releasing IP address ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.490 [INFO][5196] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.490 [INFO][5196] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.490 [INFO][5196] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.516 [WARNING][5196] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.516 [INFO][5196] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.523 [INFO][5196] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:21.528670 containerd[1879]: 2024-06-25 18:38:21.525 [INFO][5190] k8s.go 621: Teardown processing complete. 
ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.530363 containerd[1879]: time="2024-06-25T18:38:21.528742988Z" level=info msg="TearDown network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" successfully" Jun 25 18:38:21.530363 containerd[1879]: time="2024-06-25T18:38:21.528773220Z" level=info msg="StopPodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" returns successfully" Jun 25 18:38:21.530363 containerd[1879]: time="2024-06-25T18:38:21.529432030Z" level=info msg="RemovePodSandbox for \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" Jun 25 18:38:21.530363 containerd[1879]: time="2024-06-25T18:38:21.529467418Z" level=info msg="Forcibly stopping sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\"" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.601 [WARNING][5214] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0dc1be98-a119-4811-b95d-a24913f2cc14", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"04f491322af7a93e806dfa9f03ea44e6cb9187c7976acb729cdd5d53172cd43c", Pod:"coredns-7db6d8ff4d-bblmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib07cd040fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.601 [INFO][5214] k8s.go 608: Cleaning up 
netns ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.601 [INFO][5214] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" iface="eth0" netns="" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.601 [INFO][5214] k8s.go 615: Releasing IP address(es) ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.601 [INFO][5214] utils.go 188: Calico CNI releasing IP address ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.678 [INFO][5220] ipam_plugin.go 411: Releasing address using handleID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.678 [INFO][5220] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.678 [INFO][5220] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.690 [WARNING][5220] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.690 [INFO][5220] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" HandleID="k8s-pod-network.bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Workload="ip--172--31--20--217-k8s-coredns--7db6d8ff4d--bblmb-eth0" Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.696 [INFO][5220] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:21.710055 containerd[1879]: 2024-06-25 18:38:21.703 [INFO][5214] k8s.go 621: Teardown processing complete. ContainerID="bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c" Jun 25 18:38:21.710829 containerd[1879]: time="2024-06-25T18:38:21.710105779Z" level=info msg="TearDown network for sandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" successfully" Jun 25 18:38:21.718730 containerd[1879]: time="2024-06-25T18:38:21.718554285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:38:21.718730 containerd[1879]: time="2024-06-25T18:38:21.718630177Z" level=info msg="RemovePodSandbox \"bf7970fad0c7b2b14f7cd10923f4786a1054b1ec18cda7591d42f8a5752e6c6c\" returns successfully" Jun 25 18:38:21.723220 containerd[1879]: time="2024-06-25T18:38:21.723158535Z" level=info msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.830 [WARNING][5238] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4109c626-0c92-402f-a0e5-85bdc1e223de", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472", Pod:"csi-node-driver-jd8sm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali02677af2580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.830 [INFO][5238] k8s.go 608: Cleaning up netns ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.830 [INFO][5238] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" iface="eth0" netns="" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.830 [INFO][5238] k8s.go 615: Releasing IP address(es) ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.830 [INFO][5238] utils.go 188: Calico CNI releasing IP address ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.895 [INFO][5245] ipam_plugin.go 411: Releasing address using handleID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.895 [INFO][5245] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.895 [INFO][5245] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.905 [WARNING][5245] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.905 [INFO][5245] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.913 [INFO][5245] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:21.927712 containerd[1879]: 2024-06-25 18:38:21.917 [INFO][5238] k8s.go 621: Teardown processing complete. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:21.928695 containerd[1879]: time="2024-06-25T18:38:21.928258811Z" level=info msg="TearDown network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" successfully" Jun 25 18:38:21.928695 containerd[1879]: time="2024-06-25T18:38:21.928320762Z" level=info msg="StopPodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" returns successfully" Jun 25 18:38:21.930826 containerd[1879]: time="2024-06-25T18:38:21.929415889Z" level=info msg="RemovePodSandbox for \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" Jun 25 18:38:21.930826 containerd[1879]: time="2024-06-25T18:38:21.929452692Z" level=info msg="Forcibly stopping sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\"" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.055 [WARNING][5267] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4109c626-0c92-402f-a0e5-85bdc1e223de", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472", Pod:"csi-node-driver-jd8sm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali02677af2580", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.055 [INFO][5267] k8s.go 608: Cleaning up netns ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.055 [INFO][5267] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" iface="eth0" netns="" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.055 [INFO][5267] k8s.go 615: Releasing IP address(es) ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.055 [INFO][5267] utils.go 188: Calico CNI releasing IP address ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.151 [INFO][5274] ipam_plugin.go 411: Releasing address using handleID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.152 [INFO][5274] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.152 [INFO][5274] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.170 [WARNING][5274] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.170 [INFO][5274] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" HandleID="k8s-pod-network.a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Workload="ip--172--31--20--217-k8s-csi--node--driver--jd8sm-eth0" Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.176 [INFO][5274] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:22.184897 containerd[1879]: 2024-06-25 18:38:22.182 [INFO][5267] k8s.go 621: Teardown processing complete. ContainerID="a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923" Jun 25 18:38:22.186830 containerd[1879]: time="2024-06-25T18:38:22.186289557Z" level=info msg="TearDown network for sandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" successfully" Jun 25 18:38:22.196040 containerd[1879]: time="2024-06-25T18:38:22.195980649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:38:22.196358 containerd[1879]: time="2024-06-25T18:38:22.196161889Z" level=info msg="RemovePodSandbox \"a2e25af349ccaf02eaf050372e010ee51788e04c4821ba4f07c8a7c348808923\" returns successfully" Jun 25 18:38:22.412643 containerd[1879]: time="2024-06-25T18:38:22.412215669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.414954 containerd[1879]: time="2024-06-25T18:38:22.414117865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:38:22.417356 containerd[1879]: time="2024-06-25T18:38:22.417199749Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.428418 containerd[1879]: time="2024-06-25T18:38:22.428322879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:38:22.430236 containerd[1879]: time="2024-06-25T18:38:22.430184469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.073588572s" Jun 25 18:38:22.430343 containerd[1879]: time="2024-06-25T18:38:22.430245915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:38:22.436279 containerd[1879]: 
time="2024-06-25T18:38:22.436031905Z" level=info msg="CreateContainer within sandbox \"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:38:22.489446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705902991.mount: Deactivated successfully. Jun 25 18:38:22.511574 containerd[1879]: time="2024-06-25T18:38:22.511422165Z" level=info msg="CreateContainer within sandbox \"9b9b292a120d1b939bde00dfcd853a57ad5b9c177a2b4bf3e8291dbff76a1472\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e3750a2716838015f7434cbceb43b89995f3218d77396a0befac25b2893e57b\"" Jun 25 18:38:22.513862 containerd[1879]: time="2024-06-25T18:38:22.512288706Z" level=info msg="StartContainer for \"8e3750a2716838015f7434cbceb43b89995f3218d77396a0befac25b2893e57b\"" Jun 25 18:38:22.657643 systemd[1]: Started cri-containerd-8e3750a2716838015f7434cbceb43b89995f3218d77396a0befac25b2893e57b.scope - libcontainer container 8e3750a2716838015f7434cbceb43b89995f3218d77396a0befac25b2893e57b. 
Jun 25 18:38:22.773048 containerd[1879]: time="2024-06-25T18:38:22.772942775Z" level=info msg="StartContainer for \"8e3750a2716838015f7434cbceb43b89995f3218d77396a0befac25b2893e57b\" returns successfully" Jun 25 18:38:23.419834 kubelet[3289]: I0625 18:38:23.419774 3289 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:38:23.424098 kubelet[3289]: I0625 18:38:23.424062 3289 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:38:23.811649 kubelet[3289]: I0625 18:38:23.810979 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jd8sm" podStartSLOduration=32.767852205 podStartE2EDuration="40.810956147s" podCreationTimestamp="2024-06-25 18:37:43 +0000 UTC" firstStartedPulling="2024-06-25 18:38:14.388913075 +0000 UTC m=+54.800998959" lastFinishedPulling="2024-06-25 18:38:22.432017028 +0000 UTC m=+62.844102901" observedRunningTime="2024-06-25 18:38:23.810641461 +0000 UTC m=+64.222727355" watchObservedRunningTime="2024-06-25 18:38:23.810956147 +0000 UTC m=+64.223042037" Jun 25 18:38:24.317269 systemd[1]: Started sshd@9-172.31.20.217:22-139.178.68.195:49010.service - OpenSSH per-connection server daemon (139.178.68.195:49010). Jun 25 18:38:24.549460 sshd[5320]: Accepted publickey for core from 139.178.68.195 port 49010 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:24.556228 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:24.578837 systemd-logind[1858]: New session 10 of user core. Jun 25 18:38:24.594475 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 18:38:24.982748 sshd[5320]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:24.987445 systemd[1]: sshd@9-172.31.20.217:22-139.178.68.195:49010.service: Deactivated successfully. Jun 25 18:38:24.990235 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:38:24.991532 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:38:24.992989 systemd-logind[1858]: Removed session 10. Jun 25 18:38:25.023917 systemd[1]: Started sshd@10-172.31.20.217:22-139.178.68.195:49014.service - OpenSSH per-connection server daemon (139.178.68.195:49014). Jun 25 18:38:25.198694 sshd[5334]: Accepted publickey for core from 139.178.68.195 port 49014 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:25.201152 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:25.212069 systemd-logind[1858]: New session 11 of user core. Jun 25 18:38:25.220027 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:38:25.603403 sshd[5334]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:25.613069 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:38:25.615996 systemd[1]: sshd@10-172.31.20.217:22-139.178.68.195:49014.service: Deactivated successfully. Jun 25 18:38:25.625969 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:38:25.650203 systemd-logind[1858]: Removed session 11. Jun 25 18:38:25.667368 systemd[1]: Started sshd@11-172.31.20.217:22-139.178.68.195:49026.service - OpenSSH per-connection server daemon (139.178.68.195:49026). Jun 25 18:38:25.890211 sshd[5345]: Accepted publickey for core from 139.178.68.195 port 49026 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:25.894984 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:25.922843 systemd-logind[1858]: New session 12 of user core. 
Jun 25 18:38:25.931047 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:38:26.373279 sshd[5345]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:26.378248 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:38:26.378637 systemd[1]: sshd@11-172.31.20.217:22-139.178.68.195:49026.service: Deactivated successfully. Jun 25 18:38:26.384526 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:38:26.387156 systemd-logind[1858]: Removed session 12. Jun 25 18:38:31.433412 systemd[1]: Started sshd@12-172.31.20.217:22-139.178.68.195:52340.service - OpenSSH per-connection server daemon (139.178.68.195:52340). Jun 25 18:38:31.647491 sshd[5431]: Accepted publickey for core from 139.178.68.195 port 52340 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:31.652172 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:31.682912 systemd-logind[1858]: New session 13 of user core. Jun 25 18:38:31.700255 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:38:32.107109 sshd[5431]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:32.118609 systemd[1]: sshd@12-172.31.20.217:22-139.178.68.195:52340.service: Deactivated successfully. Jun 25 18:38:32.125169 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:38:32.126634 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:38:32.129426 systemd-logind[1858]: Removed session 13. Jun 25 18:38:37.147207 systemd[1]: Started sshd@13-172.31.20.217:22-139.178.68.195:52346.service - OpenSSH per-connection server daemon (139.178.68.195:52346). 
Jun 25 18:38:37.355894 sshd[5454]: Accepted publickey for core from 139.178.68.195 port 52346 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:37.357758 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:37.372401 systemd-logind[1858]: New session 14 of user core. Jun 25 18:38:37.379044 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:38:37.736431 sshd[5454]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:37.741535 systemd[1]: sshd@13-172.31.20.217:22-139.178.68.195:52346.service: Deactivated successfully. Jun 25 18:38:37.746431 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:38:37.749011 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:38:37.750817 systemd-logind[1858]: Removed session 14. Jun 25 18:38:42.779288 systemd[1]: Started sshd@14-172.31.20.217:22-139.178.68.195:50046.service - OpenSSH per-connection server daemon (139.178.68.195:50046). Jun 25 18:38:42.982070 sshd[5474]: Accepted publickey for core from 139.178.68.195 port 50046 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:42.984906 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:42.990863 systemd-logind[1858]: New session 15 of user core. Jun 25 18:38:42.999070 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:38:43.756355 sshd[5474]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:43.762419 systemd[1]: sshd@14-172.31.20.217:22-139.178.68.195:50046.service: Deactivated successfully. Jun 25 18:38:43.765458 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:38:43.768234 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:38:43.770488 systemd-logind[1858]: Removed session 15. 
Jun 25 18:38:48.815246 systemd[1]: Started sshd@15-172.31.20.217:22-139.178.68.195:34586.service - OpenSSH per-connection server daemon (139.178.68.195:34586). Jun 25 18:38:48.998516 sshd[5505]: Accepted publickey for core from 139.178.68.195 port 34586 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:49.000581 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:49.014029 systemd-logind[1858]: New session 16 of user core. Jun 25 18:38:49.021254 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:38:49.408182 sshd[5505]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:49.414751 systemd[1]: sshd@15-172.31.20.217:22-139.178.68.195:34586.service: Deactivated successfully. Jun 25 18:38:49.418997 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:38:49.420981 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:38:49.422690 systemd-logind[1858]: Removed session 16. Jun 25 18:38:49.440229 systemd[1]: Started sshd@16-172.31.20.217:22-139.178.68.195:34598.service - OpenSSH per-connection server daemon (139.178.68.195:34598). Jun 25 18:38:49.614490 sshd[5524]: Accepted publickey for core from 139.178.68.195 port 34598 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:49.618185 sshd[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:49.627318 systemd-logind[1858]: New session 17 of user core. Jun 25 18:38:49.635149 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:38:50.436123 sshd[5524]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:50.441432 systemd[1]: sshd@16-172.31.20.217:22-139.178.68.195:34598.service: Deactivated successfully. Jun 25 18:38:50.444369 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:38:50.446709 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit. 
Jun 25 18:38:50.448867 systemd-logind[1858]: Removed session 17. Jun 25 18:38:50.476588 systemd[1]: Started sshd@17-172.31.20.217:22-139.178.68.195:34602.service - OpenSSH per-connection server daemon (139.178.68.195:34602). Jun 25 18:38:50.648687 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 34602 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:50.654398 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:50.660999 systemd-logind[1858]: New session 18 of user core. Jun 25 18:38:50.671243 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:38:53.168369 kubelet[3289]: I0625 18:38:53.168180 3289 topology_manager.go:215] "Topology Admit Handler" podUID="0770d9ea-51bf-4ab7-9d41-3c115d04ab97" podNamespace="calico-apiserver" podName="calico-apiserver-6dc96d4ff5-grkfm" Jun 25 18:38:53.351820 kubelet[3289]: I0625 18:38:53.350642 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfltz\" (UniqueName: \"kubernetes.io/projected/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-kube-api-access-lfltz\") pod \"calico-apiserver-6dc96d4ff5-grkfm\" (UID: \"0770d9ea-51bf-4ab7-9d41-3c115d04ab97\") " pod="calico-apiserver/calico-apiserver-6dc96d4ff5-grkfm" Jun 25 18:38:53.351820 kubelet[3289]: I0625 18:38:53.350743 3289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-calico-apiserver-certs\") pod \"calico-apiserver-6dc96d4ff5-grkfm\" (UID: \"0770d9ea-51bf-4ab7-9d41-3c115d04ab97\") " pod="calico-apiserver/calico-apiserver-6dc96d4ff5-grkfm" Jun 25 18:38:53.355716 systemd[1]: Created slice kubepods-besteffort-pod0770d9ea_51bf_4ab7_9d41_3c115d04ab97.slice - libcontainer container kubepods-besteffort-pod0770d9ea_51bf_4ab7_9d41_3c115d04ab97.slice. 
Jun 25 18:38:53.485458 kubelet[3289]: E0625 18:38:53.462944 3289 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:38:53.533874 kubelet[3289]: E0625 18:38:53.529610 3289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-calico-apiserver-certs podName:0770d9ea-51bf-4ab7-9d41-3c115d04ab97 nodeName:}" failed. No retries permitted until 2024-06-25 18:38:54.016911314 +0000 UTC m=+94.428997201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-calico-apiserver-certs") pod "calico-apiserver-6dc96d4ff5-grkfm" (UID: "0770d9ea-51bf-4ab7-9d41-3c115d04ab97") : secret "calico-apiserver-certs" not found Jun 25 18:38:53.703547 update_engine[1859]: I0625 18:38:53.703343 1859 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 18:38:53.703547 update_engine[1859]: I0625 18:38:53.703418 1859 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 18:38:53.712857 update_engine[1859]: I0625 18:38:53.712603 1859 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714365 1859 omaha_request_params.cc:62] Current group set to alpha Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714632 1859 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714669 1859 update_attempter.cc:643] Scheduling an action processor start. 
Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714689 1859 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714764 1859 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714886 1859 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714893 1859 omaha_request_action.cc:272] Request: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: Jun 25 18:38:53.716375 update_engine[1859]: I0625 18:38:53.714928 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:38:53.754392 update_engine[1859]: I0625 18:38:53.754357 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:38:53.758839 update_engine[1859]: I0625 18:38:53.758773 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 18:38:53.759071 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 18:38:53.761130 update_engine[1859]: E0625 18:38:53.760919 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:38:53.761130 update_engine[1859]: I0625 18:38:53.761013 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 18:38:54.061700 kubelet[3289]: E0625 18:38:54.061510 3289 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:38:54.061700 kubelet[3289]: E0625 18:38:54.061681 3289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-calico-apiserver-certs podName:0770d9ea-51bf-4ab7-9d41-3c115d04ab97 nodeName:}" failed. No retries permitted until 2024-06-25 18:38:55.061626575 +0000 UTC m=+95.473712450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0770d9ea-51bf-4ab7-9d41-3c115d04ab97-calico-apiserver-certs") pod "calico-apiserver-6dc96d4ff5-grkfm" (UID: "0770d9ea-51bf-4ab7-9d41-3c115d04ab97") : secret "calico-apiserver-certs" not found Jun 25 18:38:54.385173 sshd[5536]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:54.393687 systemd[1]: sshd@17-172.31.20.217:22-139.178.68.195:34602.service: Deactivated successfully. Jun 25 18:38:54.399576 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:38:54.403313 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:38:54.433009 systemd[1]: Started sshd@18-172.31.20.217:22-139.178.68.195:34608.service - OpenSSH per-connection server daemon (139.178.68.195:34608). Jun 25 18:38:54.436194 systemd-logind[1858]: Removed session 18. 
Jun 25 18:38:54.673502 sshd[5568]: Accepted publickey for core from 139.178.68.195 port 34608 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:38:54.679062 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:38:54.687075 systemd-logind[1858]: New session 19 of user core. Jun 25 18:38:54.695673 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:38:55.172380 containerd[1879]: time="2024-06-25T18:38:55.172267699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc96d4ff5-grkfm,Uid:0770d9ea-51bf-4ab7-9d41-3c115d04ab97,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:38:55.763912 systemd-networkd[1718]: cali4ce26c9e42e: Link UP Jun 25 18:38:55.764125 systemd-networkd[1718]: cali4ce26c9e42e: Gained carrier Jun 25 18:38:55.785413 (udev-worker)[5598]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.472 [INFO][5578] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0 calico-apiserver-6dc96d4ff5- calico-apiserver 0770d9ea-51bf-4ab7-9d41-3c115d04ab97 1019 0 2024-06-25 18:38:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dc96d4ff5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-217 calico-apiserver-6dc96d4ff5-grkfm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ce26c9e42e [] []}} ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-" Jun 25 18:38:55.812111 containerd[1879]: 
2024-06-25 18:38:55.473 [INFO][5578] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.617 [INFO][5591] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" HandleID="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Workload="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.641 [INFO][5591] ipam_plugin.go 264: Auto assigning IP ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" HandleID="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Workload="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-217", "pod":"calico-apiserver-6dc96d4ff5-grkfm", "timestamp":"2024-06-25 18:38:55.617079266 +0000 UTC"}, Hostname:"ip-172-31-20-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.641 [INFO][5591] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.641 [INFO][5591] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.641 [INFO][5591] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-217' Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.656 [INFO][5591] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.671 [INFO][5591] ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.688 [INFO][5591] ipam.go 489: Trying affinity for 192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.694 [INFO][5591] ipam.go 155: Attempting to load block cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.700 [INFO][5591] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.701 [INFO][5591] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.716 [INFO][5591] ipam.go 1685: Creating new handle: k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448 Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.733 [INFO][5591] ipam.go 1203: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.745 [INFO][5591] ipam.go 1216: Successfully claimed IPs: [192.168.36.69/26] block=192.168.36.64/26 
handle="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.745 [INFO][5591] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.36.69/26] handle="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" host="ip-172-31-20-217" Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.745 [INFO][5591] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:38:55.812111 containerd[1879]: 2024-06-25 18:38:55.746 [INFO][5591] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.36.69/26] IPv6=[] ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" HandleID="k8s-pod-network.d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Workload="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.757 [INFO][5578] k8s.go 386: Populated endpoint ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0", GenerateName:"calico-apiserver-6dc96d4ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0770d9ea-51bf-4ab7-9d41-3c115d04ab97", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc96d4ff5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"", Pod:"calico-apiserver-6dc96d4ff5-grkfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce26c9e42e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.758 [INFO][5578] k8s.go 387: Calico CNI using IPs: [192.168.36.69/32] ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.758 [INFO][5578] dataplane_linux.go 68: Setting the host side veth name to cali4ce26c9e42e ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.763 [INFO][5578] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.767 [INFO][5578] k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0", GenerateName:"calico-apiserver-6dc96d4ff5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0770d9ea-51bf-4ab7-9d41-3c115d04ab97", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dc96d4ff5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-217", ContainerID:"d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448", Pod:"calico-apiserver-6dc96d4ff5-grkfm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ce26c9e42e", MAC:"66:11:c2:03:cc:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:38:55.814475 containerd[1879]: 2024-06-25 18:38:55.801 [INFO][5578] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448" Namespace="calico-apiserver" Pod="calico-apiserver-6dc96d4ff5-grkfm" WorkloadEndpoint="ip--172--31--20--217-k8s-calico--apiserver--6dc96d4ff5--grkfm-eth0" Jun 25 18:38:55.956874 containerd[1879]: time="2024-06-25T18:38:55.956735422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:38:55.960332 containerd[1879]: time="2024-06-25T18:38:55.957469903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:55.960332 containerd[1879]: time="2024-06-25T18:38:55.957814865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:38:55.960332 containerd[1879]: time="2024-06-25T18:38:55.957847976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:38:56.145769 systemd[1]: Started cri-containerd-d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448.scope - libcontainer container d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448. Jun 25 18:38:56.306829 sshd[5568]: pam_unix(sshd:session): session closed for user core Jun 25 18:38:56.314061 systemd[1]: sshd@18-172.31.20.217:22-139.178.68.195:34608.service: Deactivated successfully. Jun 25 18:38:56.323602 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:38:56.325943 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:38:56.338723 systemd-logind[1858]: Removed session 19. Jun 25 18:38:56.350681 systemd[1]: Started sshd@19-172.31.20.217:22-139.178.68.195:34610.service - OpenSSH per-connection server daemon (139.178.68.195:34610). 
Jun 25 18:38:56.446464 containerd[1879]: time="2024-06-25T18:38:56.446119174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dc96d4ff5-grkfm,Uid:0770d9ea-51bf-4ab7-9d41-3c115d04ab97,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448\""
Jun 25 18:38:56.457075 containerd[1879]: time="2024-06-25T18:38:56.453986303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 25 18:38:56.586667 sshd[5663]: Accepted publickey for core from 139.178.68.195 port 34610 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:38:56.591003 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:38:56.598401 systemd-logind[1858]: New session 20 of user core.
Jun 25 18:38:56.608056 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 18:38:57.060190 sshd[5663]: pam_unix(sshd:session): session closed for user core
Jun 25 18:38:57.063749 systemd[1]: sshd@19-172.31.20.217:22-139.178.68.195:34610.service: Deactivated successfully.
Jun 25 18:38:57.071912 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 18:38:57.075255 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit.
Jun 25 18:38:57.086859 systemd-logind[1858]: Removed session 20.
Jun 25 18:38:57.093101 systemd-networkd[1718]: cali4ce26c9e42e: Gained IPv6LL
Jun 25 18:38:59.365351 ntpd[1849]: Listen normally on 12 cali4ce26c9e42e [fe80::ecee:eeff:feee:eeee%11]:123
Jun 25 18:38:59.378075 ntpd[1849]: 25 Jun 18:38:59 ntpd[1849]: Listen normally on 12 cali4ce26c9e42e [fe80::ecee:eeff:feee:eeee%11]:123
Jun 25 18:38:59.980609 containerd[1879]: time="2024-06-25T18:38:59.980555390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:38:59.983834 containerd[1879]: time="2024-06-25T18:38:59.983415576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jun 25 18:39:00.026847 containerd[1879]: time="2024-06-25T18:39:00.026668217Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:39:00.067051 containerd[1879]: time="2024-06-25T18:39:00.066870154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:39:00.080931 containerd[1879]: time="2024-06-25T18:39:00.080873462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.614505144s"
Jun 25 18:39:00.080931 containerd[1879]: time="2024-06-25T18:39:00.080929371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jun 25 18:39:00.092627 containerd[1879]: time="2024-06-25T18:39:00.092568733Z" level=info msg="CreateContainer within sandbox \"d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 25 18:39:00.136470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547221169.mount: Deactivated successfully.
Jun 25 18:39:00.205434 containerd[1879]: time="2024-06-25T18:39:00.205372253Z" level=info msg="CreateContainer within sandbox \"d74cd0949f8ec104ac6a535d3aa9059bccc8478a729353061e884fc9c2805448\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dccb2a770723bc8ce55535a93a0450661ab1713679b96303859f4bd3391ae366\""
Jun 25 18:39:00.207399 containerd[1879]: time="2024-06-25T18:39:00.207137920Z" level=info msg="StartContainer for \"dccb2a770723bc8ce55535a93a0450661ab1713679b96303859f4bd3391ae366\""
Jun 25 18:39:00.284098 systemd[1]: Started cri-containerd-dccb2a770723bc8ce55535a93a0450661ab1713679b96303859f4bd3391ae366.scope - libcontainer container dccb2a770723bc8ce55535a93a0450661ab1713679b96303859f4bd3391ae366.
Jun 25 18:39:00.428443 containerd[1879]: time="2024-06-25T18:39:00.428386004Z" level=info msg="StartContainer for \"dccb2a770723bc8ce55535a93a0450661ab1713679b96303859f4bd3391ae366\" returns successfully"
Jun 25 18:39:01.175606 kubelet[3289]: I0625 18:39:01.173547 3289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dc96d4ff5-grkfm" podStartSLOduration=4.50666921 podStartE2EDuration="8.136468049s" podCreationTimestamp="2024-06-25 18:38:53 +0000 UTC" firstStartedPulling="2024-06-25 18:38:56.452067115 +0000 UTC m=+96.864152990" lastFinishedPulling="2024-06-25 18:39:00.081865948 +0000 UTC m=+100.493951829" observedRunningTime="2024-06-25 18:39:01.09497425 +0000 UTC m=+101.507060147" watchObservedRunningTime="2024-06-25 18:39:01.136468049 +0000 UTC m=+101.548553944"
Jun 25 18:39:02.139870 systemd[1]: Started sshd@20-172.31.20.217:22-139.178.68.195:41860.service - OpenSSH per-connection server daemon (139.178.68.195:41860).
Jun 25 18:39:02.499110 sshd[5761]: Accepted publickey for core from 139.178.68.195 port 41860 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:02.501029 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:02.515639 systemd-logind[1858]: New session 21 of user core.
Jun 25 18:39:02.528159 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 25 18:39:03.587749 update_engine[1859]: I0625 18:39:03.585901 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 25 18:39:03.589351 update_engine[1859]: I0625 18:39:03.589022 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 25 18:39:03.589351 update_engine[1859]: I0625 18:39:03.589310 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 25 18:39:03.591474 update_engine[1859]: E0625 18:39:03.589937 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 25 18:39:03.592731 update_engine[1859]: I0625 18:39:03.591789 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jun 25 18:39:04.003748 sshd[5761]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:04.019377 systemd[1]: sshd@20-172.31.20.217:22-139.178.68.195:41860.service: Deactivated successfully.
Jun 25 18:39:04.023312 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 18:39:04.025017 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit.
Jun 25 18:39:04.026435 systemd-logind[1858]: Removed session 21.
Jun 25 18:39:09.040043 systemd[1]: Started sshd@21-172.31.20.217:22-139.178.68.195:38444.service - OpenSSH per-connection server daemon (139.178.68.195:38444).
Jun 25 18:39:09.228778 sshd[5789]: Accepted publickey for core from 139.178.68.195 port 38444 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:09.230639 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:09.237024 systemd-logind[1858]: New session 22 of user core.
Jun 25 18:39:09.243131 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 25 18:39:09.445325 sshd[5789]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:09.450327 systemd[1]: sshd@21-172.31.20.217:22-139.178.68.195:38444.service: Deactivated successfully.
Jun 25 18:39:09.453479 systemd[1]: session-22.scope: Deactivated successfully.
Jun 25 18:39:09.454989 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit.
Jun 25 18:39:09.456859 systemd-logind[1858]: Removed session 22.
Jun 25 18:39:13.587271 update_engine[1859]: I0625 18:39:13.587218 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 25 18:39:13.588101 update_engine[1859]: I0625 18:39:13.587476 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 25 18:39:13.588101 update_engine[1859]: I0625 18:39:13.587735 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 25 18:39:13.588332 update_engine[1859]: E0625 18:39:13.588214 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 25 18:39:13.588332 update_engine[1859]: I0625 18:39:13.588268 1859 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jun 25 18:39:14.492421 systemd[1]: Started sshd@22-172.31.20.217:22-139.178.68.195:38456.service - OpenSSH per-connection server daemon (139.178.68.195:38456).
Jun 25 18:39:14.717688 sshd[5807]: Accepted publickey for core from 139.178.68.195 port 38456 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:14.727993 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:14.753483 systemd-logind[1858]: New session 23 of user core.
Jun 25 18:39:14.758764 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 25 18:39:15.036031 sshd[5807]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:15.047959 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit.
Jun 25 18:39:15.049423 systemd[1]: sshd@22-172.31.20.217:22-139.178.68.195:38456.service: Deactivated successfully.
Jun 25 18:39:15.058178 systemd[1]: session-23.scope: Deactivated successfully.
Jun 25 18:39:15.063658 systemd-logind[1858]: Removed session 23.
Jun 25 18:39:20.072171 systemd[1]: Started sshd@23-172.31.20.217:22-139.178.68.195:42366.service - OpenSSH per-connection server daemon (139.178.68.195:42366).
Jun 25 18:39:20.262850 sshd[5823]: Accepted publickey for core from 139.178.68.195 port 42366 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:20.264349 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:20.272545 systemd-logind[1858]: New session 24 of user core.
Jun 25 18:39:20.283425 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 25 18:39:20.679855 sshd[5823]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:20.700657 systemd[1]: sshd@23-172.31.20.217:22-139.178.68.195:42366.service: Deactivated successfully.
Jun 25 18:39:20.706916 systemd[1]: session-24.scope: Deactivated successfully.
Jun 25 18:39:20.708496 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit.
Jun 25 18:39:20.710440 systemd-logind[1858]: Removed session 24.
Jun 25 18:39:23.585923 update_engine[1859]: I0625 18:39:23.585856 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 25 18:39:23.586591 update_engine[1859]: I0625 18:39:23.586102 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 25 18:39:23.586591 update_engine[1859]: I0625 18:39:23.586356 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 25 18:39:23.587182 update_engine[1859]: E0625 18:39:23.587153 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 25 18:39:23.587287 update_engine[1859]: I0625 18:39:23.587209 1859 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 25 18:39:23.587287 update_engine[1859]: I0625 18:39:23.587217 1859 omaha_request_action.cc:617] Omaha request response:
Jun 25 18:39:23.587883 update_engine[1859]: E0625 18:39:23.587853 1859 omaha_request_action.cc:636] Omaha request network transfer failed.
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587899 1859 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587913 1859 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587918 1859 update_attempter.cc:306] Processing Done.
Jun 25 18:39:23.587971 update_engine[1859]: E0625 18:39:23.587951 1859 update_attempter.cc:619] Update failed.
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587962 1859 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587966 1859 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jun 25 18:39:23.587971 update_engine[1859]: I0625 18:39:23.587973 1859 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jun 25 18:39:23.588435 update_engine[1859]: I0625 18:39:23.588073 1859 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 25 18:39:23.588435 update_engine[1859]: I0625 18:39:23.588100 1859 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 25 18:39:23.588435 update_engine[1859]: I0625 18:39:23.588105 1859 omaha_request_action.cc:272] Request:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]:
Jun 25 18:39:23.588435 update_engine[1859]: I0625 18:39:23.588109 1859 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 25 18:39:23.588821 update_engine[1859]: I0625 18:39:23.588533 1859 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 25 18:39:23.588821 update_engine[1859]: I0625 18:39:23.588757 1859 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 25 18:39:23.589281 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jun 25 18:39:23.589630 update_engine[1859]: E0625 18:39:23.589278 1859 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589326 1859 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589332 1859 omaha_request_action.cc:617] Omaha request response:
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589339 1859 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589342 1859 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589346 1859 update_attempter.cc:306] Processing Done.
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589351 1859 update_attempter.cc:310] Error event sent.
Jun 25 18:39:23.589630 update_engine[1859]: I0625 18:39:23.589359 1859 update_check_scheduler.cc:74] Next update check in 40m10s
Jun 25 18:39:23.590178 locksmithd[1910]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jun 25 18:39:25.717691 systemd[1]: Started sshd@24-172.31.20.217:22-139.178.68.195:42376.service - OpenSSH per-connection server daemon (139.178.68.195:42376).
Jun 25 18:39:25.898560 sshd[5841]: Accepted publickey for core from 139.178.68.195 port 42376 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:25.900242 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:25.906446 systemd-logind[1858]: New session 25 of user core.
Jun 25 18:39:25.912106 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 25 18:39:26.175233 systemd[1]: run-containerd-runc-k8s.io-1bcf9f8822d78a798ce987887f6c76acf62642e199bc885efcaeab27ebf147c7-runc.V5XxBg.mount: Deactivated successfully.
Jun 25 18:39:26.231055 sshd[5841]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:26.237606 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit.
Jun 25 18:39:26.238693 systemd[1]: sshd@24-172.31.20.217:22-139.178.68.195:42376.service: Deactivated successfully.
Jun 25 18:39:26.242185 systemd[1]: session-25.scope: Deactivated successfully.
Jun 25 18:39:26.244603 systemd-logind[1858]: Removed session 25.
Jun 25 18:39:31.270269 systemd[1]: Started sshd@25-172.31.20.217:22-139.178.68.195:40030.service - OpenSSH per-connection server daemon (139.178.68.195:40030).
Jun 25 18:39:31.545822 sshd[5901]: Accepted publickey for core from 139.178.68.195 port 40030 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ
Jun 25 18:39:31.548304 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:39:31.558820 systemd-logind[1858]: New session 26 of user core.
Jun 25 18:39:31.562038 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 25 18:39:31.839763 sshd[5901]: pam_unix(sshd:session): session closed for user core
Jun 25 18:39:31.847746 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit.
Jun 25 18:39:31.849821 systemd[1]: sshd@25-172.31.20.217:22-139.178.68.195:40030.service: Deactivated successfully.
Jun 25 18:39:31.858347 systemd[1]: session-26.scope: Deactivated successfully.
Jun 25 18:39:31.861895 systemd-logind[1858]: Removed session 26.
Jun 25 18:40:19.044319 systemd[1]: cri-containerd-0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99.scope: Deactivated successfully.
Jun 25 18:40:19.044654 systemd[1]: cri-containerd-0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99.scope: Consumed 3.475s CPU time, 25.7M memory peak, 0B memory swap peak.
Jun 25 18:40:19.083154 systemd[1]: cri-containerd-e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e.scope: Deactivated successfully.
Jun 25 18:40:19.084190 systemd[1]: cri-containerd-e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e.scope: Consumed 6.458s CPU time.
Jun 25 18:40:19.135426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99-rootfs.mount: Deactivated successfully.
Jun 25 18:40:19.159084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e-rootfs.mount: Deactivated successfully.
Jun 25 18:40:19.185997 containerd[1879]: time="2024-06-25T18:40:19.136172064Z" level=info msg="shim disconnected" id=0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99 namespace=k8s.io
Jun 25 18:40:19.186570 containerd[1879]: time="2024-06-25T18:40:19.186007227Z" level=warning msg="cleaning up after shim disconnected" id=0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99 namespace=k8s.io
Jun 25 18:40:19.186570 containerd[1879]: time="2024-06-25T18:40:19.186031135Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:40:19.186570 containerd[1879]: time="2024-06-25T18:40:19.160943654Z" level=info msg="shim disconnected" id=e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e namespace=k8s.io
Jun 25 18:40:19.186570 containerd[1879]: time="2024-06-25T18:40:19.186460667Z" level=warning msg="cleaning up after shim disconnected" id=e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e namespace=k8s.io
Jun 25 18:40:19.186570 containerd[1879]: time="2024-06-25T18:40:19.186472158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:40:19.383116 kubelet[3289]: I0625 18:40:19.382380 3289 scope.go:117] "RemoveContainer" containerID="e0ceabca1a930e2e742e844d8f844c47ec4989c5a5fa93ba2da4ff6868a2e24e"
Jun 25 18:40:19.391733 kubelet[3289]: I0625 18:40:19.390195 3289 scope.go:117] "RemoveContainer" containerID="0c882dc54446421682187276be470494d3239818c578351ece07b293b5d08c99"
Jun 25 18:40:19.403484 containerd[1879]: time="2024-06-25T18:40:19.403082966Z" level=info msg="CreateContainer within sandbox \"1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 25 18:40:19.403794 containerd[1879]: time="2024-06-25T18:40:19.403461515Z" level=info msg="CreateContainer within sandbox \"b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jun 25 18:40:19.435686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607629735.mount: Deactivated successfully.
Jun 25 18:40:19.446953 containerd[1879]: time="2024-06-25T18:40:19.446898821Z" level=info msg="CreateContainer within sandbox \"1320c4344189b46e069094b48a75fd7c624da0917a26d7d36eb062663943134a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"62cdb01b4897c5b570d44a3ff8dd97b97377c713bd061acf872c4132f70b96b6\""
Jun 25 18:40:19.447593 containerd[1879]: time="2024-06-25T18:40:19.447513845Z" level=info msg="StartContainer for \"62cdb01b4897c5b570d44a3ff8dd97b97377c713bd061acf872c4132f70b96b6\""
Jun 25 18:40:19.451259 containerd[1879]: time="2024-06-25T18:40:19.450946302Z" level=info msg="CreateContainer within sandbox \"b3c75770b291e349cba58561e9a240dc0560ad41684cc06dac0f67d7a66e61a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ee22c72ed3e6b9710443720e01e389ceaa1a4134d7b8dca437f7471d5928f7b7\""
Jun 25 18:40:19.451749 containerd[1879]: time="2024-06-25T18:40:19.451637304Z" level=info msg="StartContainer for \"ee22c72ed3e6b9710443720e01e389ceaa1a4134d7b8dca437f7471d5928f7b7\""
Jun 25 18:40:19.505040 systemd[1]: Started cri-containerd-62cdb01b4897c5b570d44a3ff8dd97b97377c713bd061acf872c4132f70b96b6.scope - libcontainer container 62cdb01b4897c5b570d44a3ff8dd97b97377c713bd061acf872c4132f70b96b6.
Jun 25 18:40:19.517091 systemd[1]: Started cri-containerd-ee22c72ed3e6b9710443720e01e389ceaa1a4134d7b8dca437f7471d5928f7b7.scope - libcontainer container ee22c72ed3e6b9710443720e01e389ceaa1a4134d7b8dca437f7471d5928f7b7.
Jun 25 18:40:19.604761 containerd[1879]: time="2024-06-25T18:40:19.604698219Z" level=info msg="StartContainer for \"ee22c72ed3e6b9710443720e01e389ceaa1a4134d7b8dca437f7471d5928f7b7\" returns successfully"
Jun 25 18:40:19.614416 containerd[1879]: time="2024-06-25T18:40:19.613185952Z" level=info msg="StartContainer for \"62cdb01b4897c5b570d44a3ff8dd97b97377c713bd061acf872c4132f70b96b6\" returns successfully"
Jun 25 18:40:23.396209 kubelet[3289]: E0625 18:40:23.396035 3289 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": context deadline exceeded"
Jun 25 18:40:24.084713 systemd[1]: cri-containerd-0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810.scope: Deactivated successfully.
Jun 25 18:40:24.087338 systemd[1]: cri-containerd-0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810.scope: Consumed 2.230s CPU time, 16.7M memory peak, 0B memory swap peak.
Jun 25 18:40:24.143824 containerd[1879]: time="2024-06-25T18:40:24.143717924Z" level=info msg="shim disconnected" id=0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810 namespace=k8s.io
Jun 25 18:40:24.143824 containerd[1879]: time="2024-06-25T18:40:24.143789254Z" level=warning msg="cleaning up after shim disconnected" id=0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810 namespace=k8s.io
Jun 25 18:40:24.144458 containerd[1879]: time="2024-06-25T18:40:24.143835270Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 18:40:24.147837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810-rootfs.mount: Deactivated successfully.
Jun 25 18:40:24.423029 kubelet[3289]: I0625 18:40:24.422396 3289 scope.go:117] "RemoveContainer" containerID="0aae028db90b8bd519277b2d2681df86f8e2249013b6b22b72be783c03a65810"
Jun 25 18:40:24.428473 containerd[1879]: time="2024-06-25T18:40:24.428141238Z" level=info msg="CreateContainer within sandbox \"fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 25 18:40:24.508558 containerd[1879]: time="2024-06-25T18:40:24.508491049Z" level=info msg="CreateContainer within sandbox \"fec89ca21bf4796610582b79a385f2387799f96b0aaacef752c77d105d5fe778\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95\""
Jun 25 18:40:24.509127 containerd[1879]: time="2024-06-25T18:40:24.509093900Z" level=info msg="StartContainer for \"4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95\""
Jun 25 18:40:24.589058 systemd[1]: run-containerd-runc-k8s.io-4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95-runc.gSfNE7.mount: Deactivated successfully.
Jun 25 18:40:24.608484 systemd[1]: Started cri-containerd-4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95.scope - libcontainer container 4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95.
Jun 25 18:40:24.700593 containerd[1879]: time="2024-06-25T18:40:24.700367411Z" level=info msg="StartContainer for \"4e9e1c95a5886dc827e7d63b16040c176edbc3d0fd6bd65bb4ca6aa723da7a95\" returns successfully"
Jun 25 18:40:33.430156 kubelet[3289]: E0625 18:40:33.428924 3289 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-217?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"