Mar 25 01:41:15.088382 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025
Mar 25 01:41:15.088433 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:41:15.088454 kernel: BIOS-provided physical RAM map:
Mar 25 01:41:15.088466 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 25 01:41:15.088477 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 25 01:41:15.088489 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 25 01:41:15.088504 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 25 01:41:15.088516 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 25 01:41:15.088529 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 25 01:41:15.088541 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 25 01:41:15.088568 kernel: NX (Execute Disable) protection: active
Mar 25 01:41:15.088579 kernel: APIC: Static calls initialized
Mar 25 01:41:15.088591 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 25 01:41:15.088604 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 25 01:41:15.088619 kernel: extended physical RAM map:
Mar 25 01:41:15.088633 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 25 01:41:15.088648 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 25 01:41:15.088661 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 25 01:41:15.088674 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 25 01:41:15.088688 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 25 01:41:15.088701 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 25 01:41:15.088715 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 25 01:41:15.088728 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 25 01:41:15.088741 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 25 01:41:15.088754 kernel: efi: EFI v2.7 by EDK II
Mar 25 01:41:15.088768 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Mar 25 01:41:15.088783 kernel: secureboot: Secure boot disabled
Mar 25 01:41:15.088796 kernel: SMBIOS 2.7 present.
Mar 25 01:41:15.088809 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 25 01:41:15.088822 kernel: Hypervisor detected: KVM
Mar 25 01:41:15.088835 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 25 01:41:15.088848 kernel: kvm-clock: using sched offset of 4234577904 cycles
Mar 25 01:41:15.088879 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 25 01:41:15.088893 kernel: tsc: Detected 2500.004 MHz processor
Mar 25 01:41:15.088907 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 25 01:41:15.088921 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 25 01:41:15.088935 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 25 01:41:15.088952 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 25 01:41:15.088965 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 25 01:41:15.088979 kernel: Using GB pages for direct mapping
Mar 25 01:41:15.088999 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:41:15.089013 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 25 01:41:15.089028 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 25 01:41:15.089046 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 25 01:41:15.089073 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 25 01:41:15.089088 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 25 01:41:15.089102 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 25 01:41:15.089117 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 25 01:41:15.089132 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 25 01:41:15.089146 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 25 01:41:15.089161 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 25 01:41:15.089179 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 25 01:41:15.089194 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 25 01:41:15.089209 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 25 01:41:15.089223 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 25 01:41:15.089238 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 25 01:41:15.089252 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 25 01:41:15.089267 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 25 01:41:15.089282 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 25 01:41:15.089296 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 25 01:41:15.089314 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 25 01:41:15.089328 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 25 01:41:15.089343 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 25 01:41:15.089358 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 25 01:41:15.089373 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 25 01:41:15.089387 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 25 01:41:15.089400 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 25 01:41:15.089414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 25 01:41:15.091470 kernel: NUMA: Initialized distance table, cnt=1
Mar 25 01:41:15.091501 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Mar 25 01:41:15.091516 kernel: Zone ranges:
Mar 25 01:41:15.091530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 25 01:41:15.091545 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 25 01:41:15.091559 kernel: Normal empty
Mar 25 01:41:15.091574 kernel: Movable zone start for each node
Mar 25 01:41:15.091588 kernel: Early memory node ranges
Mar 25 01:41:15.091603 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 25 01:41:15.091617 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 25 01:41:15.091636 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 25 01:41:15.091650 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 25 01:41:15.091664 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 25 01:41:15.091678 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 25 01:41:15.091693 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 25 01:41:15.091708 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 25 01:41:15.091722 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 25 01:41:15.091737 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 25 01:41:15.091752 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 25 01:41:15.091766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 25 01:41:15.091784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 25 01:41:15.091798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 25 01:41:15.091812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 25 01:41:15.091827 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 25 01:41:15.091841 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 25 01:41:15.091855 kernel: TSC deadline timer available
Mar 25 01:41:15.091869 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 25 01:41:15.091884 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 25 01:41:15.091899 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 25 01:41:15.091917 kernel: Booting paravirtualized kernel on KVM
Mar 25 01:41:15.091932 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 25 01:41:15.091947 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 25 01:41:15.091961 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 25 01:41:15.091976 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 25 01:41:15.091991 kernel: pcpu-alloc: [0] 0 1
Mar 25 01:41:15.092005 kernel: kvm-guest: PV spinlocks enabled
Mar 25 01:41:15.092019 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 25 01:41:15.092040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:41:15.092055 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:41:15.092068 kernel: random: crng init done
Mar 25 01:41:15.092081 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:41:15.092092 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 25 01:41:15.092104 kernel: Fallback order for Node 0: 0
Mar 25 01:41:15.092117 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 25 01:41:15.092130 kernel: Policy zone: DMA32
Mar 25 01:41:15.092147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:41:15.092161 kernel: Memory: 1870488K/2037804K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 167060K reserved, 0K cma-reserved)
Mar 25 01:41:15.092173 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 25 01:41:15.092186 kernel: Kernel/User page tables isolation: enabled
Mar 25 01:41:15.092200 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 25 01:41:15.092224 kernel: ftrace: allocated 149 pages with 4 groups
Mar 25 01:41:15.092240 kernel: Dynamic Preempt: voluntary
Mar 25 01:41:15.092254 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:41:15.092270 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:41:15.092283 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 25 01:41:15.092297 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:41:15.092310 kernel: Rude variant of Tasks RCU enabled.
Mar 25 01:41:15.092327 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:41:15.092341 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:41:15.092354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 25 01:41:15.092368 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 25 01:41:15.092382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:41:15.092398 kernel: Console: colour dummy device 80x25
Mar 25 01:41:15.092412 kernel: printk: console [tty0] enabled
Mar 25 01:41:15.092437 kernel: printk: console [ttyS0] enabled
Mar 25 01:41:15.092451 kernel: ACPI: Core revision 20230628
Mar 25 01:41:15.092466 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 25 01:41:15.092479 kernel: APIC: Switch to symmetric I/O mode setup
Mar 25 01:41:15.092493 kernel: x2apic enabled
Mar 25 01:41:15.092506 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 25 01:41:15.092531 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Mar 25 01:41:15.092547 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Mar 25 01:41:15.092560 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 25 01:41:15.092573 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 25 01:41:15.092586 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 25 01:41:15.092599 kernel: Spectre V2 : Mitigation: Retpolines
Mar 25 01:41:15.092611 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 25 01:41:15.092624 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 25 01:41:15.092637 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 25 01:41:15.092650 kernel: RETBleed: Vulnerable
Mar 25 01:41:15.092663 kernel: Speculative Store Bypass: Vulnerable
Mar 25 01:41:15.092680 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 25 01:41:15.092693 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 25 01:41:15.092705 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 25 01:41:15.092718 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 25 01:41:15.092731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 25 01:41:15.092744 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 25 01:41:15.092757 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 25 01:41:15.092770 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 25 01:41:15.092783 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 25 01:41:15.092796 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 25 01:41:15.092809 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 25 01:41:15.092824 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 25 01:41:15.092837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 25 01:41:15.092850 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 25 01:41:15.092863 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 25 01:41:15.092876 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 25 01:41:15.092888 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 25 01:41:15.092901 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 25 01:41:15.092914 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 25 01:41:15.092926 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 25 01:41:15.092939 kernel: Freeing SMP alternatives memory: 32K
Mar 25 01:41:15.092952 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:41:15.092965 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:41:15.092981 kernel: landlock: Up and running.
Mar 25 01:41:15.092994 kernel: SELinux: Initializing.
Mar 25 01:41:15.093007 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 01:41:15.093020 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 01:41:15.093035 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 25 01:41:15.093048 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:41:15.093061 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:41:15.093075 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:41:15.093219 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 25 01:41:15.093235 kernel: signal: max sigframe size: 3632
Mar 25 01:41:15.093254 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:41:15.093269 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:41:15.093283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 25 01:41:15.093296 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:41:15.093309 kernel: smpboot: x86: Booting SMP configuration:
Mar 25 01:41:15.093323 kernel: .... node #0, CPUs: #1
Mar 25 01:41:15.093337 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 25 01:41:15.093351 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 25 01:41:15.093367 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 01:41:15.093380 kernel: smpboot: Max logical packages: 1
Mar 25 01:41:15.093393 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Mar 25 01:41:15.093406 kernel: devtmpfs: initialized
Mar 25 01:41:15.093419 kernel: x86/mm: Memory block size: 128MB
Mar 25 01:41:15.094887 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 25 01:41:15.094906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:41:15.094922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 25 01:41:15.094938 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:41:15.094958 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:41:15.094972 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:41:15.094986 kernel: audit: type=2000 audit(1742866875.170:1): state=initialized audit_enabled=0 res=1
Mar 25 01:41:15.095000 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:41:15.095015 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 25 01:41:15.095029 kernel: cpuidle: using governor menu
Mar 25 01:41:15.095045 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:41:15.095060 kernel: dca service started, version 1.12.1
Mar 25 01:41:15.095075 kernel: PCI: Using configuration type 1 for base access
Mar 25 01:41:15.095094 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 25 01:41:15.095109 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:41:15.095124 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:41:15.095139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:41:15.095164 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:41:15.095179 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:41:15.095192 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:41:15.095206 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:41:15.095222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:41:15.095239 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 25 01:41:15.095254 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 25 01:41:15.095268 kernel: ACPI: Interpreter enabled
Mar 25 01:41:15.095283 kernel: ACPI: PM: (supports S0 S5)
Mar 25 01:41:15.095297 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 25 01:41:15.095312 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 25 01:41:15.095327 kernel: PCI: Using E820 reservations for host bridge windows
Mar 25 01:41:15.095359 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 25 01:41:15.095374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 01:41:15.095633 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 01:41:15.095774 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 25 01:41:15.095902 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 25 01:41:15.095920 kernel: acpiphp: Slot [3] registered
Mar 25 01:41:15.095936 kernel: acpiphp: Slot [4] registered
Mar 25 01:41:15.095951 kernel: acpiphp: Slot [5] registered
Mar 25 01:41:15.095966 kernel: acpiphp: Slot [6] registered
Mar 25 01:41:15.095984 kernel: acpiphp: Slot [7] registered
Mar 25 01:41:15.095999 kernel: acpiphp: Slot [8] registered
Mar 25 01:41:15.096014 kernel: acpiphp: Slot [9] registered
Mar 25 01:41:15.096027 kernel: acpiphp: Slot [10] registered
Mar 25 01:41:15.096041 kernel: acpiphp: Slot [11] registered
Mar 25 01:41:15.096056 kernel: acpiphp: Slot [12] registered
Mar 25 01:41:15.096088 kernel: acpiphp: Slot [13] registered
Mar 25 01:41:15.096108 kernel: acpiphp: Slot [14] registered
Mar 25 01:41:15.096132 kernel: acpiphp: Slot [15] registered
Mar 25 01:41:15.096146 kernel: acpiphp: Slot [16] registered
Mar 25 01:41:15.096163 kernel: acpiphp: Slot [17] registered
Mar 25 01:41:15.096175 kernel: acpiphp: Slot [18] registered
Mar 25 01:41:15.096189 kernel: acpiphp: Slot [19] registered
Mar 25 01:41:15.096202 kernel: acpiphp: Slot [20] registered
Mar 25 01:41:15.096217 kernel: acpiphp: Slot [21] registered
Mar 25 01:41:15.096232 kernel: acpiphp: Slot [22] registered
Mar 25 01:41:15.096250 kernel: acpiphp: Slot [23] registered
Mar 25 01:41:15.096263 kernel: acpiphp: Slot [24] registered
Mar 25 01:41:15.096278 kernel: acpiphp: Slot [25] registered
Mar 25 01:41:15.096295 kernel: acpiphp: Slot [26] registered
Mar 25 01:41:15.096308 kernel: acpiphp: Slot [27] registered
Mar 25 01:41:15.096322 kernel: acpiphp: Slot [28] registered
Mar 25 01:41:15.096337 kernel: acpiphp: Slot [29] registered
Mar 25 01:41:15.096350 kernel: acpiphp: Slot [30] registered
Mar 25 01:41:15.096366 kernel: acpiphp: Slot [31] registered
Mar 25 01:41:15.096381 kernel: PCI host bridge to bus 0000:00
Mar 25 01:41:15.098328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 25 01:41:15.098523 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 25 01:41:15.098671 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 25 01:41:15.098800 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 25 01:41:15.098923 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 25 01:41:15.099044 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 01:41:15.099238 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 25 01:41:15.099408 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 25 01:41:15.099641 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 25 01:41:15.099783 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 25 01:41:15.099920 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 25 01:41:15.100058 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 25 01:41:15.100196 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 25 01:41:15.100333 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 25 01:41:15.100495 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 25 01:41:15.100641 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 25 01:41:15.100789 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 25 01:41:15.100928 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 25 01:41:15.101067 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 25 01:41:15.101205 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 25 01:41:15.101344 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 25 01:41:15.101532 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 25 01:41:15.101689 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 25 01:41:15.101841 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 25 01:41:15.101989 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 25 01:41:15.102009 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 25 01:41:15.102024 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 25 01:41:15.102039 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 25 01:41:15.102054 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 25 01:41:15.102072 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 25 01:41:15.102087 kernel: iommu: Default domain type: Translated
Mar 25 01:41:15.102103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 25 01:41:15.102117 kernel: efivars: Registered efivars operations
Mar 25 01:41:15.102131 kernel: PCI: Using ACPI for IRQ routing
Mar 25 01:41:15.102146 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 25 01:41:15.102161 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 25 01:41:15.102175 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 25 01:41:15.102189 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 25 01:41:15.102326 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 25 01:41:15.102558 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 25 01:41:15.102705 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 25 01:41:15.102724 kernel: vgaarb: loaded
Mar 25 01:41:15.102739 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 25 01:41:15.102754 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 25 01:41:15.102770 kernel: clocksource: Switched to clocksource kvm-clock
Mar 25 01:41:15.102784 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:41:15.102800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:41:15.102819 kernel: pnp: PnP ACPI init
Mar 25 01:41:15.102833 kernel: pnp: PnP ACPI: found 5 devices
Mar 25 01:41:15.102848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 25 01:41:15.102863 kernel: NET: Registered PF_INET protocol family
Mar 25 01:41:15.102879 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:41:15.102893 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 25 01:41:15.102910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:41:15.102925 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 25 01:41:15.102944 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 25 01:41:15.102959 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 25 01:41:15.102974 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 01:41:15.102989 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 01:41:15.103004 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:41:15.103018 kernel: NET: Registered PF_XDP protocol family
Mar 25 01:41:15.103141 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 01:41:15.103304 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 01:41:15.103452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 01:41:15.103579 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 25 01:41:15.103700 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 25 01:41:15.103844 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 25 01:41:15.103865 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:41:15.103882 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 25 01:41:15.103898 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Mar 25 01:41:15.103914 kernel: clocksource: Switched to clocksource tsc
Mar 25 01:41:15.103931 kernel: Initialise system trusted keyrings
Mar 25 01:41:15.103950 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 25 01:41:15.103967 kernel: Key type asymmetric registered
Mar 25 01:41:15.103983 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:41:15.103998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 01:41:15.104014 kernel: io scheduler mq-deadline registered
Mar 25 01:41:15.104030 kernel: io scheduler kyber registered
Mar 25 01:41:15.104046 kernel: io scheduler bfq registered
Mar 25 01:41:15.104063 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 01:41:15.104079 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:41:15.104096 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 01:41:15.104115 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 01:41:15.104131 kernel: i8042: Warning: Keylock active
Mar 25 01:41:15.104147 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 01:41:15.104163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 01:41:15.104316 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 25 01:41:15.104474 kernel: rtc_cmos 00:00: registered as rtc0
Mar 25 01:41:15.104620 kernel: rtc_cmos 00:00: setting system clock to 2025-03-25T01:41:14 UTC (1742866874)
Mar 25 01:41:15.104750 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 25 01:41:15.104770 kernel: intel_pstate: CPU model not supported
Mar 25 01:41:15.104786 kernel: efifb: probing for efifb
Mar 25 01:41:15.104803 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 25 01:41:15.104844 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 25 01:41:15.104863 kernel: efifb: scrolling: redraw
Mar 25 01:41:15.104880 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 25 01:41:15.104897 kernel: Console: switching to colour frame buffer device 100x37
Mar 25 01:41:15.104914 kernel: fb0: EFI VGA frame buffer device
Mar 25 01:41:15.104934 kernel: pstore: Using crash dump compression: deflate
Mar 25 01:41:15.104950 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 25 01:41:15.104967 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:41:15.104986 kernel: Segment Routing with IPv6
Mar 25 01:41:15.105003 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:41:15.105021 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:41:15.105038 kernel: Key type dns_resolver registered
Mar 25 01:41:15.105054 kernel: IPI shorthand broadcast: enabled
Mar 25 01:41:15.105071 kernel: sched_clock: Marking stable (731003235, 157482303)->(1017516394, -129030856)
Mar 25 01:41:15.105091 kernel: registered taskstats version 1
Mar 25 01:41:15.105107 kernel: Loading compiled-in X.509 certificates
Mar 25 01:41:15.105124 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 01:41:15.105140 kernel: Key type .fscrypt registered
Mar 25 01:41:15.105157 kernel: Key type fscrypt-provisioning registered
Mar 25 01:41:15.105173 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:41:15.105190 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:41:15.105207 kernel: ima: No architecture policies found
Mar 25 01:41:15.105223 kernel: clk: Disabling unused clocks
Mar 25 01:41:15.105244 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 01:41:15.105261 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 01:41:15.105278 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 01:41:15.105294 kernel: Run /init as init process
Mar 25 01:41:15.105311 kernel: with arguments:
Mar 25 01:41:15.105327 kernel: /init
Mar 25 01:41:15.105343 kernel: with environment:
Mar 25 01:41:15.105360 kernel: HOME=/
Mar 25 01:41:15.105376 kernel: TERM=linux
Mar 25 01:41:15.105395 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:41:15.105413 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:41:15.105463 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:41:15.105480 systemd[1]: Detected virtualization amazon.
Mar 25 01:41:15.105494 systemd[1]: Detected architecture x86-64.
Mar 25 01:41:15.105513 systemd[1]: Running in initrd.
Mar 25 01:41:15.105529 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:41:15.105545 systemd[1]: Hostname set to .
Mar 25 01:41:15.105563 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:41:15.105580 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:41:15.105597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:41:15.105615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:41:15.105636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:41:15.105653 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:41:15.105670 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:41:15.105689 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:41:15.105708 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:41:15.105725 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:41:15.105742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:41:15.105762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:41:15.105779 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:41:15.105796 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:41:15.105813 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:41:15.105830 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:41:15.105847 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:41:15.105864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:41:15.105882 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:41:15.105898 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:41:15.105919 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 25 01:41:15.105936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:41:15.105953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:41:15.105970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:41:15.105987 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:41:15.106005 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:41:15.106022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:41:15.106038 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:41:15.106060 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:41:15.106076 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:41:15.106093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:41:15.106108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:15.106123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:41:15.106140 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:41:15.106205 systemd-journald[179]: Collecting audit messages is disabled.
Mar 25 01:41:15.106247 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:41:15.106264 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:41:15.106284 systemd-journald[179]: Journal started
Mar 25 01:41:15.106317 systemd-journald[179]: Runtime Journal (/run/log/journal/ec282f26c9f8a61fa16bb45be5c489dc) is 4.7M, max 38.1M, 33.3M free.
Mar 25 01:41:15.094321 systemd-modules-load[180]: Inserted module 'overlay'
Mar 25 01:41:15.110590 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:41:15.111132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:15.119621 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:41:15.125582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:41:15.132693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:41:15.139581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:41:15.146449 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:41:15.156561 kernel: Bridge firewalling registered
Mar 25 01:41:15.157511 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 25 01:41:15.158904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:41:15.163452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:41:15.165697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:41:15.169880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:41:15.180690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:41:15.185606 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:41:15.189943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:41:15.199597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:41:15.211975 dracut-cmdline[213]: dracut-dracut-053
Mar 25 01:41:15.216053 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:41:15.259892 systemd-resolved[216]: Positive Trust Anchors:
Mar 25 01:41:15.260924 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:41:15.261015 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:41:15.269729 systemd-resolved[216]: Defaulting to hostname 'linux'.
Mar 25 01:41:15.271645 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:41:15.273382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:41:15.308458 kernel: SCSI subsystem initialized
Mar 25 01:41:15.318459 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:41:15.329458 kernel: iscsi: registered transport (tcp)
Mar 25 01:41:15.352456 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:41:15.352538 kernel: QLogic iSCSI HBA Driver
Mar 25 01:41:15.391576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:41:15.393500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:41:15.433534 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:41:15.433610 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:41:15.434560 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:41:15.476474 kernel: raid6: avx512x4 gen() 15032 MB/s
Mar 25 01:41:15.493459 kernel: raid6: avx512x2 gen() 14619 MB/s
Mar 25 01:41:15.510457 kernel: raid6: avx512x1 gen() 14948 MB/s
Mar 25 01:41:15.528452 kernel: raid6: avx2x4 gen() 14771 MB/s
Mar 25 01:41:15.546448 kernel: raid6: avx2x2 gen() 14951 MB/s
Mar 25 01:41:15.563646 kernel: raid6: avx2x1 gen() 11207 MB/s
Mar 25 01:41:15.563693 kernel: raid6: using algorithm avx512x4 gen() 15032 MB/s
Mar 25 01:41:15.583484 kernel: raid6: .... xor() 7370 MB/s, rmw enabled
Mar 25 01:41:15.583544 kernel: raid6: using avx512x2 recovery algorithm
Mar 25 01:41:15.605451 kernel: xor: automatically using best checksumming function avx
Mar 25 01:41:15.759458 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:41:15.769324 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:41:15.771251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:41:15.796707 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Mar 25 01:41:15.802448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:41:15.808242 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:41:15.831826 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Mar 25 01:41:15.861382 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:41:15.863515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:41:15.927927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:41:15.933693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:41:15.967009 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:41:15.973297 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:41:15.974994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:41:15.976837 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:41:15.979985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:41:16.011086 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:41:16.028235 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 25 01:41:16.040098 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 25 01:41:16.040320 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 25 01:41:16.040540 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5a:d3:02:81:65
Mar 25 01:41:16.040706 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 01:41:16.052270 (udev-worker)[460]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:41:16.057404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:41:16.057590 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:41:16.067950 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 25 01:41:16.067984 kernel: AES CTR mode by8 optimization enabled
Mar 25 01:41:16.066577 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:41:16.069510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:41:16.069738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:16.073156 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:16.078755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:16.085762 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:41:16.103731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:41:16.103862 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:16.105079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:41:16.110376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:16.135289 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 25 01:41:16.135617 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 25 01:41:16.137341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:16.139401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:41:16.147455 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 25 01:41:16.152849 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 01:41:16.152911 kernel: GPT:9289727 != 16777215
Mar 25 01:41:16.152931 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 01:41:16.154812 kernel: GPT:9289727 != 16777215
Mar 25 01:41:16.154853 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 01:41:16.156715 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:41:16.170221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:41:16.211884 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (451)
Mar 25 01:41:16.228490 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Mar 25 01:41:16.303664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 25 01:41:16.321869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 25 01:41:16.322531 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 25 01:41:16.341986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 25 01:41:16.353318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 25 01:41:16.355200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:41:16.375531 disk-uuid[635]: Primary Header is updated.
Mar 25 01:41:16.375531 disk-uuid[635]: Secondary Entries is updated.
Mar 25 01:41:16.375531 disk-uuid[635]: Secondary Header is updated.
Mar 25 01:41:16.381449 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:41:16.402452 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:41:17.396444 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:41:17.397179 disk-uuid[636]: The operation has completed successfully.
Mar 25 01:41:17.525886 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:41:17.526006 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:41:17.574276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:41:17.590775 sh[894]: Success
Mar 25 01:41:17.605458 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 25 01:41:17.696458 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:41:17.701535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:41:17.714589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:41:17.739556 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 01:41:17.739621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:41:17.742444 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:41:17.742490 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:41:17.743705 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:41:17.792455 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 25 01:41:17.806842 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:41:17.808115 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:41:17.809367 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:41:17.813574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:41:17.863669 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:41:17.863750 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:41:17.863773 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 25 01:41:17.869477 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 25 01:41:17.877453 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:41:17.879459 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:41:17.885613 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:41:17.921438 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:41:17.923859 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:41:17.972386 systemd-networkd[1083]: lo: Link UP
Mar 25 01:41:17.972397 systemd-networkd[1083]: lo: Gained carrier
Mar 25 01:41:17.974079 systemd-networkd[1083]: Enumeration completed
Mar 25 01:41:17.974198 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:41:17.975065 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:41:17.975070 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:41:17.975585 systemd[1]: Reached target network.target - Network.
Mar 25 01:41:17.978018 systemd-networkd[1083]: eth0: Link UP
Mar 25 01:41:17.978023 systemd-networkd[1083]: eth0: Gained carrier
Mar 25 01:41:17.978035 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:41:17.988511 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.30.255/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 25 01:41:18.097573 ignition[1034]: Ignition 2.20.0
Mar 25 01:41:18.097590 ignition[1034]: Stage: fetch-offline
Mar 25 01:41:18.097823 ignition[1034]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:18.097837 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:18.099750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:41:18.098180 ignition[1034]: Ignition finished successfully
Mar 25 01:41:18.102181 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:41:18.128663 ignition[1093]: Ignition 2.20.0
Mar 25 01:41:18.128676 ignition[1093]: Stage: fetch
Mar 25 01:41:18.129078 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:18.129092 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:18.129220 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:18.149566 ignition[1093]: PUT result: OK
Mar 25 01:41:18.152759 ignition[1093]: parsed url from cmdline: ""
Mar 25 01:41:18.152807 ignition[1093]: no config URL provided
Mar 25 01:41:18.152845 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:41:18.152862 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:41:18.152901 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:18.153957 ignition[1093]: PUT result: OK
Mar 25 01:41:18.154023 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 25 01:41:18.154968 ignition[1093]: GET result: OK
Mar 25 01:41:18.155114 ignition[1093]: parsing config with SHA512: cf5fc495aa3377f309e8aa7e0f299afcdde1377ab4b41355baf1b4e0ee08072a2744b7f849e692e17e41bc441c434b1b1405e4922ff129d5c873de39679119a5
Mar 25 01:41:18.160170 unknown[1093]: fetched base config from "system"
Mar 25 01:41:18.160185 unknown[1093]: fetched base config from "system"
Mar 25 01:41:18.160771 ignition[1093]: fetch: fetch complete
Mar 25 01:41:18.160192 unknown[1093]: fetched user config from "aws"
Mar 25 01:41:18.160778 ignition[1093]: fetch: fetch passed
Mar 25 01:41:18.160834 ignition[1093]: Ignition finished successfully
Mar 25 01:41:18.162919 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:41:18.164980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:41:18.189157 ignition[1100]: Ignition 2.20.0
Mar 25 01:41:18.189171 ignition[1100]: Stage: kargs
Mar 25 01:41:18.189600 ignition[1100]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:18.189614 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:18.189735 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:18.190730 ignition[1100]: PUT result: OK
Mar 25 01:41:18.193522 ignition[1100]: kargs: kargs passed
Mar 25 01:41:18.193600 ignition[1100]: Ignition finished successfully
Mar 25 01:41:18.195269 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:41:18.196649 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:41:18.215979 ignition[1106]: Ignition 2.20.0
Mar 25 01:41:18.215993 ignition[1106]: Stage: disks
Mar 25 01:41:18.216399 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:18.216412 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:18.216569 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:18.217627 ignition[1106]: PUT result: OK
Mar 25 01:41:18.220516 ignition[1106]: disks: disks passed
Mar 25 01:41:18.220572 ignition[1106]: Ignition finished successfully
Mar 25 01:41:18.222475 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:41:18.223042 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:41:18.223418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:41:18.223930 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:41:18.224465 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:41:18.224997 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:41:18.226566 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:41:18.271371 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 25 01:41:18.274378 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:41:18.276504 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:41:18.394444 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 01:41:18.395581 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:41:18.396732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:41:18.399157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:41:18.402519 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:41:18.404194 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 01:41:18.404258 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:41:18.404291 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:41:18.413720 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:41:18.415668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:41:18.430056 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1134)
Mar 25 01:41:18.430119 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:41:18.432995 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:41:18.433061 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 25 01:41:18.439451 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 25 01:41:18.441440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:41:18.718323 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:41:18.740969 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:41:18.753688 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:41:18.779525 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:41:18.947853 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:41:18.949909 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:41:18.952615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:41:18.969408 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:41:18.971450 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:41:19.005757 ignition[1247]: INFO : Ignition 2.20.0
Mar 25 01:41:19.006843 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:41:19.009072 ignition[1247]: INFO : Stage: mount
Mar 25 01:41:19.009072 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:19.009072 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:19.009072 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:19.011201 ignition[1247]: INFO : PUT result: OK
Mar 25 01:41:19.013900 ignition[1247]: INFO : mount: mount passed
Mar 25 01:41:19.014459 ignition[1247]: INFO : Ignition finished successfully
Mar 25 01:41:19.015249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:41:19.017526 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:41:19.035998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:41:19.065462 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1259)
Mar 25 01:41:19.069502 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:41:19.069572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:41:19.069594 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 25 01:41:19.075450 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 25 01:41:19.078696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:41:19.112385 ignition[1275]: INFO : Ignition 2.20.0
Mar 25 01:41:19.112385 ignition[1275]: INFO : Stage: files
Mar 25 01:41:19.113830 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:19.113830 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:19.113830 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:19.114947 ignition[1275]: INFO : PUT result: OK
Mar 25 01:41:19.116992 ignition[1275]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:41:19.118166 ignition[1275]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:41:19.118166 ignition[1275]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:41:19.123461 ignition[1275]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:41:19.124409 ignition[1275]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:41:19.124409 ignition[1275]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:41:19.124119 unknown[1275]: wrote ssh authorized keys file for user: core
Mar 25 01:41:19.127083 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:41:19.128119 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 25 01:41:19.208680 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:41:19.345266 systemd-networkd[1083]: eth0: Gained IPv6LL
Mar 25 01:41:19.368515 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 01:41:19.369808 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 01:41:19.379990 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 25 01:41:19.858468 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 25 01:41:20.231242 ignition[1275]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 01:41:20.231242 ignition[1275]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 25 01:41:20.234043 ignition[1275]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:41:20.235144 ignition[1275]: INFO : files: files passed
Mar 25 01:41:20.235144 ignition[1275]: INFO : Ignition finished successfully
Mar 25 01:41:20.236465 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:41:20.240570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:41:20.247016 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:41:20.257393 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:41:20.257551 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 01:41:20.266885 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:41:20.266885 initrd-setup-root-after-ignition[1305]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:41:20.269653 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:41:20.268578 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:41:20.270812 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:41:20.273277 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:41:20.322836 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:41:20.322965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:41:20.324289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:41:20.325375 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:41:20.326163 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:41:20.327958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:41:20.353954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:41:20.355917 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:41:20.381785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:41:20.382494 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:41:20.383383 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:41:20.384200 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:41:20.384377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:41:20.385476 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:41:20.386410 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:41:20.387189 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:41:20.387955 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:41:20.388803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:41:20.389648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:41:20.390372 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:41:20.391167 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:41:20.392323 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:41:20.393097 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:41:20.393807 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:41:20.394215 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:41:20.395302 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:41:20.396097 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:41:20.396791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:41:20.397489 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:41:20.397935 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:41:20.398102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:41:20.399551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:41:20.399730 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:41:20.400407 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:41:20.400574 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:41:20.403620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:41:20.404213 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:41:20.404378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:41:20.412558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:41:20.416164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:41:20.417090 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:41:20.419018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:41:20.419756 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:41:20.430231 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:41:20.430767 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:41:20.449846 ignition[1329]: INFO : Ignition 2.20.0
Mar 25 01:41:20.449846 ignition[1329]: INFO : Stage: umount
Mar 25 01:41:20.449846 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:41:20.449846 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:41:20.449846 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:41:20.453948 ignition[1329]: INFO : PUT result: OK
Mar 25 01:41:20.451276 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:41:20.455724 ignition[1329]: INFO : umount: umount passed
Mar 25 01:41:20.457117 ignition[1329]: INFO : Ignition finished successfully
Mar 25 01:41:20.459309 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:41:20.459624 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:41:20.461631 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:41:20.461705 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:41:20.462161 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:41:20.462219 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:41:20.462902 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 01:41:20.462958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 01:41:20.463727 systemd[1]: Stopped target network.target - Network.
Mar 25 01:41:20.464279 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:41:20.464345 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:41:20.465010 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:41:20.465583 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:41:20.469477 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:41:20.470595 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:41:20.471005 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:41:20.471693 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:41:20.471751 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:41:20.476492 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:41:20.476554 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:41:20.479560 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:41:20.479645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:41:20.480384 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:41:20.480479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:41:20.481208 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:41:20.481818 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:41:20.482829 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:41:20.482940 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:41:20.484340 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:41:20.484832 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:41:20.489257 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:41:20.489393 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:41:20.493242 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:41:20.493595 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:41:20.493714 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:41:20.496125 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:41:20.497201 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:41:20.497265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:41:20.498814 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:41:20.500013 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:41:20.500091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:41:20.500806 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:41:20.500869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:41:20.503847 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:41:20.503909 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:41:20.504971 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:41:20.505027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:41:20.505526 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:41:20.510892 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:41:20.510987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:41:20.532752 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:41:20.532957 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:41:20.536110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:41:20.536177 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:41:20.537167 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:41:20.537233 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:41:20.537950 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:41:20.538013 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:41:20.539057 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:41:20.539129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:41:20.540192 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:41:20.540252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:41:20.543019 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:41:20.543631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:41:20.543696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:41:20.546939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:41:20.547536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:20.549633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 01:41:20.549716 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:41:20.552702 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:41:20.552830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:41:20.562918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:41:20.563054 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:41:20.564722 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:41:20.566600 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:41:20.586859 systemd[1]: Switching root.
Mar 25 01:41:20.621838 systemd-journald[179]: Journal stopped
Mar 25 01:41:22.423407 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:41:22.423520 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:41:22.423544 kernel: SELinux: policy capability open_perms=1
Mar 25 01:41:22.423577 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:41:22.423594 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:41:22.423611 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:41:22.423629 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:41:22.423647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:41:22.423670 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:41:22.423693 kernel: audit: type=1403 audit(1742866881.094:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:41:22.423803 systemd[1]: Successfully loaded SELinux policy in 55.201ms.
Mar 25 01:41:22.423838 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.183ms.
Mar 25 01:41:22.423865 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:41:22.423890 systemd[1]: Detected virtualization amazon.
Mar 25 01:41:22.423910 systemd[1]: Detected architecture x86-64.
Mar 25 01:41:22.423928 systemd[1]: Detected first boot.
Mar 25 01:41:22.423945 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:41:22.423963 zram_generator::config[1374]: No configuration found.
Mar 25 01:41:22.423983 kernel: Guest personality initialized and is inactive
Mar 25 01:41:22.424000 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:41:22.424021 kernel: Initialized host personality
Mar 25 01:41:22.424038 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:41:22.424056 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:41:22.424076 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:41:22.424095 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:41:22.424113 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:41:22.424130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:41:22.424148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:41:22.424166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:41:22.424187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:41:22.424205 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:41:22.424223 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:41:22.424242 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:41:22.424258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:41:22.424275 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:41:22.424293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:41:22.424313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:41:22.424332 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:41:22.424353 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:41:22.424372 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:41:22.424391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:41:22.424408 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:41:22.424444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:41:22.424463 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:41:22.424482 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:41:22.424506 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:41:22.424525 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:41:22.424544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:41:22.424565 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:41:22.424585 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:41:22.424605 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:41:22.424625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:41:22.424645 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:41:22.424726 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:41:22.424753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:41:22.424773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:41:22.424792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:41:22.424905 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:41:22.424940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:41:22.424964 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:41:22.424984 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:41:22.425006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:22.425027 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:41:22.425051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:41:22.425071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:41:22.425092 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:41:22.425116 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:41:22.425139 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:41:22.425162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:41:22.425185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:41:22.425207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:41:22.425229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:41:22.425255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:41:22.425275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:41:22.425296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:41:22.425317 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:41:22.425389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:41:22.425416 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:41:22.425468 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:41:22.425487 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:41:22.425510 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:41:22.425531 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:41:22.425550 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:41:22.425570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:41:22.425590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:41:22.425609 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:41:22.425628 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:41:22.429484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:41:22.429522 kernel: loop: module loaded
Mar 25 01:41:22.429553 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:41:22.429575 systemd[1]: Stopped verity-setup.service.
Mar 25 01:41:22.429597 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:22.429618 kernel: fuse: init (API version 7.39)
Mar 25 01:41:22.429642 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:41:22.429665 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:41:22.429690 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:41:22.429710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:41:22.429731 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:41:22.429752 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:41:22.429777 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:41:22.429798 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:41:22.429819 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:41:22.429839 kernel: ACPI: bus type drm_connector registered
Mar 25 01:41:22.429908 systemd-journald[1460]: Collecting audit messages is disabled.
Mar 25 01:41:22.429949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:41:22.429972 systemd-journald[1460]: Journal started
Mar 25 01:41:22.430017 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec282f26c9f8a61fa16bb45be5c489dc) is 4.7M, max 38.1M, 33.3M free.
Mar 25 01:41:21.947853 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:41:21.956662 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 25 01:41:21.957107 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:41:22.434447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:41:22.439341 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:41:22.442362 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:41:22.444610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:41:22.446637 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:41:22.447858 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:41:22.448098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:41:22.449578 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:41:22.449789 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:41:22.451065 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:41:22.451361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:41:22.452637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:41:22.453863 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:41:22.454999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:41:22.486993 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:41:22.490631 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:41:22.509243 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:41:22.510391 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:41:22.510455 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:41:22.513228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:41:22.517583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:41:22.524815 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:41:22.527997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:41:22.532011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:41:22.536603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:41:22.537471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:41:22.542057 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:41:22.542785 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:41:22.547757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:41:22.561716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:41:22.571845 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:41:22.577711 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:41:22.579077 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:41:22.586196 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:41:22.588208 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:41:22.611212 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec282f26c9f8a61fa16bb45be5c489dc is 97.436ms for 1011 entries.
Mar 25 01:41:22.611212 systemd-journald[1460]: System Journal (/var/log/journal/ec282f26c9f8a61fa16bb45be5c489dc) is 8M, max 195.6M, 187.6M free.
Mar 25 01:41:22.728630 systemd-journald[1460]: Received client request to flush runtime journal.
Mar 25 01:41:22.728709 kernel: loop0: detected capacity change from 0 to 151640
Mar 25 01:41:22.616788 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:41:22.619627 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:41:22.631687 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:41:22.724967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:41:22.729356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:41:22.736933 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:41:22.747182 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:41:22.746266 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:41:22.767286 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:41:22.777455 kernel: loop1: detected capacity change from 0 to 109808
Mar 25 01:41:22.791569 udevadm[1525]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 25 01:41:22.812169 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:41:22.816642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:41:22.864726 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Mar 25 01:41:22.865250 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Mar 25 01:41:22.873483 kernel: loop2: detected capacity change from 0 to 205544
Mar 25 01:41:22.874312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:41:22.959177 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:41:22.994456 kernel: loop3: detected capacity change from 0 to 64352
Mar 25 01:41:23.092488 kernel: loop4: detected capacity change from 0 to 151640
Mar 25 01:41:23.112447 kernel: loop5: detected capacity change from 0 to 109808
Mar 25 01:41:23.144452 kernel: loop6: detected capacity change from 0 to 205544
Mar 25 01:41:23.186460 kernel: loop7: detected capacity change from 0 to 64352
Mar 25 01:41:23.214150 (sd-merge)[1535]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 25 01:41:23.214993 (sd-merge)[1535]: Merged extensions into '/usr'.
Mar 25 01:41:23.222022 systemd[1]: Reload requested from client PID 1508 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:41:23.222038 systemd[1]: Reloading...
Mar 25 01:41:23.348515 zram_generator::config[1563]: No configuration found.
Mar 25 01:41:23.583603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:41:23.730689 systemd[1]: Reloading finished in 507 ms.
Mar 25 01:41:23.753789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:41:23.766632 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:41:23.770638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:41:23.807781 systemd[1]: Reload requested from client PID 1614 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:41:23.807967 systemd[1]: Reloading...
Mar 25 01:41:23.818554 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:41:23.826546 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:41:23.831645 systemd-tmpfiles[1615]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:41:23.832146 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Mar 25 01:41:23.832248 systemd-tmpfiles[1615]: ACLs are not supported, ignoring.
Mar 25 01:41:23.840283 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:41:23.840298 systemd-tmpfiles[1615]: Skipping /boot
Mar 25 01:41:23.908446 systemd-tmpfiles[1615]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:41:23.908461 systemd-tmpfiles[1615]: Skipping /boot
Mar 25 01:41:23.967520 ldconfig[1503]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:41:23.986458 zram_generator::config[1642]: No configuration found.
Mar 25 01:41:24.144373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:41:24.230303 systemd[1]: Reloading finished in 419 ms.
Mar 25 01:41:24.245052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:41:24.246252 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:41:24.259002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:41:24.270915 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:41:24.275717 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:41:24.286911 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:41:24.292597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:41:24.297194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:41:24.300701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:41:24.312364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:24.314371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:41:24.318916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:41:24.325042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:41:24.330544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:41:24.332684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:41:24.332860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:41:24.332999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:24.360097 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:41:24.366731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:41:24.367191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:41:24.370324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:41:24.371579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:41:24.375016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:41:24.377679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:41:24.404078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:24.404804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:41:24.412536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:41:24.420472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:41:24.430392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:41:24.447770 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:41:24.448969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:41:24.449703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:41:24.450085 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 01:41:24.452648 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:41:24.463054 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:41:24.469263 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:41:24.472881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:41:24.473797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:41:24.476835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:41:24.477559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:41:24.480112 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:41:24.480349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:41:24.483858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:41:24.484482 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:41:24.486116 systemd-udevd[1704]: Using default interface naming scheme 'v255'.
Mar 25 01:41:24.500819 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:41:24.506968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:41:24.507260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:41:24.509856 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:41:24.536944 augenrules[1744]: No rules
Mar 25 01:41:24.538233 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:41:24.540607 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:41:24.551558 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:41:24.561514 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:41:24.563018 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:41:24.564892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:41:24.581341 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:41:24.586889 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:41:24.790493 systemd-resolved[1703]: Positive Trust Anchors:
Mar 25 01:41:24.790515 systemd-resolved[1703]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:41:24.790575 systemd-resolved[1703]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:41:24.797336 (udev-worker)[1770]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:41:24.805727 systemd-resolved[1703]: Defaulting to hostname 'linux'.
Mar 25 01:41:24.807582 systemd-networkd[1761]: lo: Link UP
Mar 25 01:41:24.808013 systemd-networkd[1761]: lo: Gained carrier
Mar 25 01:41:24.809043 systemd-networkd[1761]: Enumeration completed
Mar 25 01:41:24.809166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:41:24.813046 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:41:24.816854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:41:24.818650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:41:24.819250 systemd[1]: Reached target network.target - Network.
Mar 25 01:41:24.820199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:41:24.860519 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:41:24.865235 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 01:41:24.874803 systemd-networkd[1761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:41:24.874961 systemd-networkd[1761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:41:24.877413 systemd-networkd[1761]: eth0: Link UP
Mar 25 01:41:24.877859 systemd-networkd[1761]: eth0: Gained carrier
Mar 25 01:41:24.877968 systemd-networkd[1761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:41:24.890523 systemd-networkd[1761]: eth0: DHCPv4 address 172.31.30.255/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 25 01:41:24.895446 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1766)
Mar 25 01:41:24.927453 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 25 01:41:24.935454 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 25 01:41:24.944448 kernel: ACPI: button: Power Button [PWRF]
Mar 25 01:41:24.946449 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 25 01:41:24.951445 kernel: ACPI: button: Sleep Button [SLPF]
Mar 25 01:41:25.001463 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Mar 25 01:41:25.064721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:25.107007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:41:25.107313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:25.114120 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:41:25.118717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:41:25.127455 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:41:25.129127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 25 01:41:25.151500 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:41:25.160978 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:41:25.165600 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:41:25.175858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:41:25.190484 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:41:25.218943 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:41:25.219859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:41:25.222612 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:41:25.237073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:41:25.238736 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:41:25.239684 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:41:25.241003 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:41:25.241214 lvm[1885]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:41:25.242413 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:41:25.243343 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:41:25.244141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:41:25.244980 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:41:25.245140 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:41:25.245951 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:41:25.248000 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:41:25.250895 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:41:25.255637 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:41:25.256803 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:41:25.257989 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:41:25.267293 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:41:25.268834 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:41:25.270121 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:41:25.270970 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:41:25.271575 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:41:25.272088 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:41:25.272130 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:41:25.273216 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:41:25.277600 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 25 01:41:25.281588 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:41:25.286797 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:41:25.289163 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:41:25.290503 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:41:25.294433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:41:25.301266 systemd[1]: Started ntpd.service - Network Time Service.
Mar 25 01:41:25.306038 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:41:25.309810 jq[1893]: false
Mar 25 01:41:25.310525 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 25 01:41:25.315087 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:41:25.348338 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:41:25.362719 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:41:25.367795 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 01:41:25.371834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:41:25.375673 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:41:25.386881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:41:25.396968 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:41:25.405213 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:41:25.426197 jq[1905]: true
Mar 25 01:41:25.410989 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found loop4
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found loop5
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found loop6
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found loop7
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p1
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p2
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p3
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found usr
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p4
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p6
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p7
Mar 25 01:41:25.448558 extend-filesystems[1894]: Found nvme0n1p9
Mar 25 01:41:25.448558 extend-filesystems[1894]: Checking size of /dev/nvme0n1p9
Mar 25 01:41:25.553029 extend-filesystems[1894]: Resized partition /dev/nvme0n1p9
Mar 25 01:41:25.475544 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.510 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.522 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.531 INFO Fetch successful
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.541 INFO Fetch successful
Mar 25 01:41:25.553630 coreos-metadata[1891]: Mar 25 01:41:25.541 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 25 01:41:25.500348 dbus-daemon[1892]: [system] SELinux support is enabled
Mar 25 01:41:25.475840 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:41:25.543703 dbus-daemon[1892]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1761 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 25 01:41:25.558767 coreos-metadata[1891]: Mar 25 01:41:25.555 INFO Fetch successful
Mar 25 01:41:25.558767 coreos-metadata[1891]: Mar 25 01:41:25.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 25 01:41:25.558868 extend-filesystems[1935]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 01:41:25.506813 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:41:25.563831 dbus-daemon[1892]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 25 01:41:25.564413 update_engine[1904]: I20250325 01:41:25.559578 1904 main.cc:92] Flatcar Update Engine starting
Mar 25 01:41:25.564689 tar[1914]: linux-amd64/helm
Mar 25 01:41:25.533517 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:41:25.565029 jq[1911]: true
Mar 25 01:41:25.534501 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:41:25.544291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.568 INFO Fetch successful
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.568 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.569 INFO Fetch failed with 404: resource not found
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.570 INFO Fetch successful
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.571 INFO Fetch successful
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.571 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.572 INFO Fetch successful
Mar 25 01:41:25.573078 coreos-metadata[1891]: Mar 25 01:41:25.572 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 25 01:41:25.544350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:41:25.545036 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:41:25.545061 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:41:25.579137 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 25 01:41:25.579236 coreos-metadata[1891]: Mar 25 01:41:25.578 INFO Fetch successful
Mar 25 01:41:25.579236 coreos-metadata[1891]: Mar 25 01:41:25.578 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 25 01:41:25.579341 update_engine[1904]: I20250325 01:41:25.574244 1904 update_check_scheduler.cc:74] Next update check in 6m51s
Mar 25 01:41:25.577060 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 25 01:41:25.582359 coreos-metadata[1891]: Mar 25 01:41:25.579 INFO Fetch successful
Mar 25 01:41:25.580106 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:41:25.598739 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:41:25.606794 (ntainerd)[1933]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:41:25.656949 ntpd[1896]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: ----------------------------------------------------
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: corporation. Support and training for ntp-4 are
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: available at https://www.nwtime.org/support
Mar 25 01:41:25.664787 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: ----------------------------------------------------
Mar 25 01:41:25.656980 ntpd[1896]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:41:25.656991 ntpd[1896]: ----------------------------------------------------
Mar 25 01:41:25.657000 ntpd[1896]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:41:25.657009 ntpd[1896]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:41:25.657019 ntpd[1896]: corporation. Support and training for ntp-4 are
Mar 25 01:41:25.657029 ntpd[1896]: available at https://www.nwtime.org/support
Mar 25 01:41:25.657038 ntpd[1896]: ----------------------------------------------------
Mar 25 01:41:25.679237 ntpd[1896]: proto: precision = 0.063 usec (-24)
Mar 25 01:41:25.679616 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: proto: precision = 0.063 usec (-24)
Mar 25 01:41:25.680249 ntpd[1896]: basedate set to 2025-03-12
Mar 25 01:41:25.682593 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: basedate set to 2025-03-12
Mar 25 01:41:25.682593 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:41:25.680270 ntpd[1896]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:41:25.684678 ntpd[1896]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:41:25.684796 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:41:25.684876 ntpd[1896]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:41:25.684936 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:41:25.685157 ntpd[1896]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:41:25.685260 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:41:25.687142 ntpd[1896]: Listen normally on 3 eth0 172.31.30.255:123
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listen normally on 3 eth0 172.31.30.255:123
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listen normally on 4 lo [::1]:123
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: bind(21) AF_INET6 fe80::45a:d3ff:fe02:8165%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: unable to create socket on eth0 (5) for fe80::45a:d3ff:fe02:8165%2#123
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: failed to init interface for address fe80::45a:d3ff:fe02:8165%2
Mar 25 01:41:25.688612 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:41:25.687203 ntpd[1896]: Listen normally on 4 lo [::1]:123
Mar 25 01:41:25.705571 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:41:25.705571 ntpd[1896]: 25 Mar 01:41:25 ntpd[1896]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:41:25.687258 ntpd[1896]: bind(21) AF_INET6 fe80::45a:d3ff:fe02:8165%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:41:25.687282 ntpd[1896]: unable to create socket on eth0 (5) for fe80::45a:d3ff:fe02:8165%2#123
Mar 25 01:41:25.687300 ntpd[1896]: failed to init interface for address fe80::45a:d3ff:fe02:8165%2
Mar 25 01:41:25.687341 ntpd[1896]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:41:25.693097 ntpd[1896]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:41:25.693131 ntpd[1896]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:41:25.715489 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 25 01:41:25.717238 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 25 01:41:25.718943 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 25 01:41:25.723758 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 25 01:41:25.735320 systemd-logind[1901]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 01:41:25.735355 systemd-logind[1901]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 25 01:41:25.735377 systemd-logind[1901]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:41:25.737559 bash[1966]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:41:25.737604 systemd-logind[1901]: New seat seat0.
Mar 25 01:41:25.738320 extend-filesystems[1935]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 25 01:41:25.738320 extend-filesystems[1935]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 25 01:41:25.738320 extend-filesystems[1935]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 25 01:41:25.758628 extend-filesystems[1894]: Resized filesystem in /dev/nvme0n1p9
Mar 25 01:41:25.738417 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:41:25.751518 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:41:25.751795 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:41:25.760110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:41:25.771961 systemd[1]: Starting sshkeys.service...
Mar 25 01:41:25.788960 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1770)
Mar 25 01:41:25.855484 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 25 01:41:25.861033 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 25 01:41:25.994930 coreos-metadata[2006]: Mar 25 01:41:25.992 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 25 01:41:25.999917 coreos-metadata[2006]: Mar 25 01:41:25.998 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 25 01:41:26.008053 coreos-metadata[2006]: Mar 25 01:41:26.000 INFO Fetch successful Mar 25 01:41:26.008053 coreos-metadata[2006]: Mar 25 01:41:26.000 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 25 01:41:26.008053 coreos-metadata[2006]: Mar 25 01:41:26.007 INFO Fetch successful Mar 25 01:41:26.014738 unknown[2006]: wrote ssh authorized keys file for user: core Mar 25 01:41:26.153453 update-ssh-keys[2053]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:41:26.154418 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 01:41:26.175380 systemd[1]: Finished sshkeys.service. Mar 25 01:41:26.255307 systemd-networkd[1761]: eth0: Gained IPv6LL Mar 25 01:41:26.260471 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 25 01:41:26.266945 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:41:26.273692 dbus-daemon[1892]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 25 01:41:26.290985 dbus-daemon[1892]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1940 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 25 01:41:26.303462 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:41:26.320761 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 25 01:41:26.326730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:41:26.332828 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 25 01:41:26.347811 systemd[1]: Starting polkit.service - Authorization Manager... Mar 25 01:41:26.416292 polkitd[2076]: Started polkitd version 121 Mar 25 01:41:26.493026 polkitd[2076]: Loading rules from directory /etc/polkit-1/rules.d Mar 25 01:41:26.513556 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:41:26.532713 polkitd[2076]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 25 01:41:26.538500 polkitd[2076]: Finished loading, compiling and executing 2 rules Mar 25 01:41:26.542635 dbus-daemon[1892]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 25 01:41:26.549411 systemd[1]: Started polkit.service - Authorization Manager. Mar 25 01:41:26.554979 polkitd[2076]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 25 01:41:26.619868 amazon-ssm-agent[2072]: Initializing new seelog logger Mar 25 01:41:26.622651 amazon-ssm-agent[2072]: New Seelog Logger Creation Complete Mar 25 01:41:26.622777 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.622777 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.627125 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 processing appconfig overrides Mar 25 01:41:26.630776 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.630776 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.632172 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 processing appconfig overrides Mar 25 01:41:26.634371 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO Proxy environment variables: Mar 25 01:41:26.634655 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.634655 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 25 01:41:26.637446 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 processing appconfig overrides Mar 25 01:41:26.646797 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.646797 amazon-ssm-agent[2072]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:41:26.646797 amazon-ssm-agent[2072]: 2025/03/25 01:41:26 processing appconfig overrides Mar 25 01:41:26.650263 systemd-hostnamed[1940]: Hostname set to (transient) Mar 25 01:41:26.650824 locksmithd[1947]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:41:26.654671 systemd-resolved[1703]: System hostname changed to 'ip-172-31-30-255'. Mar 25 01:41:26.724591 containerd[1933]: time="2025-03-25T01:41:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:41:26.725243 containerd[1933]: time="2025-03-25T01:41:26.725050696Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:41:26.734150 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO https_proxy: Mar 25 01:41:26.742308 sshd_keygen[1929]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:41:26.776275 containerd[1933]: time="2025-03-25T01:41:26.776222332Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.03µs" Mar 25 01:41:26.776726 containerd[1933]: time="2025-03-25T01:41:26.776688677Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:41:26.776913 containerd[1933]: time="2025-03-25T01:41:26.776892365Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:41:26.777158 containerd[1933]: time="2025-03-25T01:41:26.777138099Z" 
level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:41:26.778352 containerd[1933]: time="2025-03-25T01:41:26.778323938Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:41:26.780460 containerd[1933]: time="2025-03-25T01:41:26.779957156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:41:26.780460 containerd[1933]: time="2025-03-25T01:41:26.780182348Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:41:26.780460 containerd[1933]: time="2025-03-25T01:41:26.780203672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.782619001Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.782650761Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.782670241Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.782685938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.782812149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.783068857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.783126979Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:41:26.783185 containerd[1933]: time="2025-03-25T01:41:26.783143030Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:41:26.786872 containerd[1933]: time="2025-03-25T01:41:26.786514632Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:41:26.788503 containerd[1933]: time="2025-03-25T01:41:26.787821609Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:41:26.788503 containerd[1933]: time="2025-03-25T01:41:26.788064607Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801438877Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801803677Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801838498Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801858174Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801976651Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.801996888Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802021202Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802040012Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802055861Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802071680Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802086676Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802103869Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802288338Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:41:26.802459 containerd[1933]: time="2025-03-25T01:41:26.802323021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:41:26.803007 containerd[1933]: time="2025-03-25T01:41:26.802350898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:41:26.803007 containerd[1933]: time="2025-03-25T01:41:26.802367727Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:41:26.803007 containerd[1933]: time="2025-03-25T01:41:26.802386127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:41:26.803007 containerd[1933]: time="2025-03-25T01:41:26.802402269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.802418321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.803396309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804220778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804262176Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804279210Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804367274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804387878Z" level=info msg="Start snapshots syncer" Mar 25 01:41:26.806332 containerd[1933]: time="2025-03-25T01:41:26.804682067Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:41:26.806753 containerd[1933]: time="2025-03-25T01:41:26.805151491Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:41:26.806753 containerd[1933]: time="2025-03-25T01:41:26.805223147Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808193339Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808400080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808479498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808500930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808519492Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808541167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808558745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808575618Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808613263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808632335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808647130Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808691664Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808713228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:41:26.810388 containerd[1933]: time="2025-03-25T01:41:26.808727132Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808742317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808756072Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808771075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808788240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808810979Z" level=info msg="runtime interface created" Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808819114Z" level=info msg="created NRI interface" Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808831840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808849643Z" level=info msg="Connect containerd service" Mar 25 01:41:26.811016 containerd[1933]: time="2025-03-25T01:41:26.808893109Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:41:26.815256 
containerd[1933]: time="2025-03-25T01:41:26.814264959Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:41:26.828416 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:41:26.834558 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO http_proxy: Mar 25 01:41:26.838718 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:41:26.885881 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:41:26.886197 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:41:26.893153 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:41:26.935541 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO no_proxy: Mar 25 01:41:26.948160 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:41:26.953057 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:41:26.958059 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 25 01:41:26.959736 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 25 01:41:27.035880 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO Checking if agent identity type OnPrem can be assumed Mar 25 01:41:27.133689 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO Checking if agent identity type EC2 can be assumed Mar 25 01:41:27.216450 containerd[1933]: time="2025-03-25T01:41:27.216386285Z" level=info msg="Start subscribing containerd event" Mar 25 01:41:27.216602 containerd[1933]: time="2025-03-25T01:41:27.216517325Z" level=info msg="Start recovering state" Mar 25 01:41:27.216645 containerd[1933]: time="2025-03-25T01:41:27.216626652Z" level=info msg="Start event monitor" Mar 25 01:41:27.216681 containerd[1933]: time="2025-03-25T01:41:27.216645436Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:41:27.216681 containerd[1933]: time="2025-03-25T01:41:27.216656655Z" level=info msg="Start streaming server" Mar 25 01:41:27.216681 containerd[1933]: time="2025-03-25T01:41:27.216677239Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:41:27.216774 containerd[1933]: time="2025-03-25T01:41:27.216688143Z" level=info msg="runtime interface starting up..." Mar 25 01:41:27.216774 containerd[1933]: time="2025-03-25T01:41:27.216697945Z" level=info msg="starting plugins..." Mar 25 01:41:27.216774 containerd[1933]: time="2025-03-25T01:41:27.216715347Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:41:27.218104 containerd[1933]: time="2025-03-25T01:41:27.218068345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:41:27.218192 containerd[1933]: time="2025-03-25T01:41:27.218145855Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:41:27.219403 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 25 01:41:27.220109 containerd[1933]: time="2025-03-25T01:41:27.220079631Z" level=info msg="containerd successfully booted in 0.498781s" Mar 25 01:41:27.232499 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO Agent will take identity from EC2 Mar 25 01:41:27.312152 tar[1914]: linux-amd64/LICENSE Mar 25 01:41:27.312152 tar[1914]: linux-amd64/README.md Mar 25 01:41:27.330765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:41:27.332075 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] Starting Core Agent Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 25 01:41:27.385882 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [Registrar] Starting registrar module Mar 25 01:41:27.386261 amazon-ssm-agent[2072]: 2025-03-25 01:41:26 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 25 01:41:27.386261 amazon-ssm-agent[2072]: 2025-03-25 01:41:27 INFO [EC2Identity] EC2 registration was successful. 
Mar 25 01:41:27.386261 amazon-ssm-agent[2072]: 2025-03-25 01:41:27 INFO [CredentialRefresher] credentialRefresher has started Mar 25 01:41:27.386261 amazon-ssm-agent[2072]: 2025-03-25 01:41:27 INFO [CredentialRefresher] Starting credentials refresher loop Mar 25 01:41:27.386261 amazon-ssm-agent[2072]: 2025-03-25 01:41:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 25 01:41:27.431689 amazon-ssm-agent[2072]: 2025-03-25 01:41:27 INFO [CredentialRefresher] Next credential rotation will be in 30.608327259566668 minutes Mar 25 01:41:28.307213 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:41:28.309675 systemd[1]: Started sshd@0-172.31.30.255:22-147.75.109.163:49490.service - OpenSSH per-connection server daemon (147.75.109.163:49490). Mar 25 01:41:28.398404 amazon-ssm-agent[2072]: 2025-03-25 01:41:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 25 01:41:28.500311 amazon-ssm-agent[2072]: 2025-03-25 01:41:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2151) started Mar 25 01:41:28.536984 sshd[2147]: Accepted publickey for core from 147.75.109.163 port 49490 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:41:28.539312 sshd-session[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:41:28.547142 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:41:28.552064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:41:28.577506 systemd-logind[1901]: New session 1 of user core. Mar 25 01:41:28.586548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:41:28.591247 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 25 01:41:28.600340 amazon-ssm-agent[2072]: 2025-03-25 01:41:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 25 01:41:28.609083 (systemd)[2164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:41:28.612807 systemd-logind[1901]: New session c1 of user core. Mar 25 01:41:28.657397 ntpd[1896]: Listen normally on 6 eth0 [fe80::45a:d3ff:fe02:8165%2]:123 Mar 25 01:41:28.745621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:41:28.746911 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:41:28.758183 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:41:28.794469 systemd[2164]: Queued start job for default target default.target. Mar 25 01:41:28.801009 systemd[2164]: Created slice app.slice - User Application Slice. Mar 25 01:41:28.801046 systemd[2164]: Reached target paths.target - Paths. Mar 25 01:41:28.801192 systemd[2164]: Reached target timers.target - Timers. Mar 25 01:41:28.803215 systemd[2164]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:41:28.817216 systemd[2164]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:41:28.817373 systemd[2164]: Reached target sockets.target - Sockets. Mar 25 01:41:28.817613 systemd[2164]: Reached target basic.target - Basic System. Mar 25 01:41:28.817687 systemd[2164]: Reached target default.target - Main User Target. Mar 25 01:41:28.817717 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:41:28.817727 systemd[2164]: Startup finished in 195ms. Mar 25 01:41:28.826609 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 25 01:41:28.829212 systemd[1]: Startup finished in 948ms (kernel) + 6.270s (initrd) + 7.787s (userspace) = 15.006s. Mar 25 01:41:28.985721 systemd[1]: Started sshd@1-172.31.30.255:22-147.75.109.163:58382.service - OpenSSH per-connection server daemon (147.75.109.163:58382). Mar 25 01:41:29.163249 sshd[2185]: Accepted publickey for core from 147.75.109.163 port 58382 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:41:29.165885 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:41:29.171667 systemd-logind[1901]: New session 2 of user core. Mar 25 01:41:29.177848 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:41:29.300488 sshd[2191]: Connection closed by 147.75.109.163 port 58382 Mar 25 01:41:29.301778 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Mar 25 01:41:29.305465 systemd[1]: sshd@1-172.31.30.255:22-147.75.109.163:58382.service: Deactivated successfully. Mar 25 01:41:29.308833 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:41:29.311892 systemd-logind[1901]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:41:29.313142 systemd-logind[1901]: Removed session 2. Mar 25 01:41:29.331192 systemd[1]: Started sshd@2-172.31.30.255:22-147.75.109.163:58398.service - OpenSSH per-connection server daemon (147.75.109.163:58398). Mar 25 01:41:29.510099 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 58398 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:41:29.511574 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:41:29.518493 systemd-logind[1901]: New session 3 of user core. Mar 25 01:41:29.523605 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 25 01:41:29.639404 sshd[2199]: Connection closed by 147.75.109.163 port 58398 Mar 25 01:41:29.640712 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Mar 25 01:41:29.644478 systemd[1]: sshd@2-172.31.30.255:22-147.75.109.163:58398.service: Deactivated successfully. Mar 25 01:41:29.647224 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:41:29.653803 systemd-logind[1901]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:41:29.655492 systemd-logind[1901]: Removed session 3. Mar 25 01:41:29.676321 systemd[1]: Started sshd@3-172.31.30.255:22-147.75.109.163:58400.service - OpenSSH per-connection server daemon (147.75.109.163:58400). Mar 25 01:41:29.865280 sshd[2205]: Accepted publickey for core from 147.75.109.163 port 58400 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:41:29.867636 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:41:29.874628 systemd-logind[1901]: New session 4 of user core. Mar 25 01:41:29.877628 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:41:29.956904 kubelet[2175]: E0325 01:41:29.956851 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:41:29.959606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:41:29.959813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:41:29.960193 systemd[1]: kubelet.service: Consumed 994ms CPU time, 238.2M memory peak. 
Mar 25 01:41:29.999260 sshd[2208]: Connection closed by 147.75.109.163 port 58400
Mar 25 01:41:29.999955 sshd-session[2205]: pam_unix(sshd:session): session closed for user core
Mar 25 01:41:30.003282 systemd[1]: sshd@3-172.31.30.255:22-147.75.109.163:58400.service: Deactivated successfully.
Mar 25 01:41:30.005259 systemd[1]: session-4.scope: Deactivated successfully.
Mar 25 01:41:30.006684 systemd-logind[1901]: Session 4 logged out. Waiting for processes to exit.
Mar 25 01:41:30.008085 systemd-logind[1901]: Removed session 4.
Mar 25 01:41:30.033943 systemd[1]: Started sshd@4-172.31.30.255:22-147.75.109.163:58410.service - OpenSSH per-connection server daemon (147.75.109.163:58410).
Mar 25 01:41:30.199728 sshd[2216]: Accepted publickey for core from 147.75.109.163 port 58410 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:41:30.200933 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:41:30.206907 systemd-logind[1901]: New session 5 of user core.
Mar 25 01:41:30.216631 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 25 01:41:30.339643 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 25 01:41:30.340013 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:41:30.358259 sudo[2219]: pam_unix(sudo:session): session closed for user root
Mar 25 01:41:30.380110 sshd[2218]: Connection closed by 147.75.109.163 port 58410
Mar 25 01:41:30.381156 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
Mar 25 01:41:30.385578 systemd[1]: sshd@4-172.31.30.255:22-147.75.109.163:58410.service: Deactivated successfully.
Mar 25 01:41:30.387784 systemd[1]: session-5.scope: Deactivated successfully.
Mar 25 01:41:30.388680 systemd-logind[1901]: Session 5 logged out. Waiting for processes to exit.
Mar 25 01:41:30.389964 systemd-logind[1901]: Removed session 5.
Mar 25 01:41:30.418856 systemd[1]: Started sshd@5-172.31.30.255:22-147.75.109.163:58412.service - OpenSSH per-connection server daemon (147.75.109.163:58412).
Mar 25 01:41:30.590091 sshd[2225]: Accepted publickey for core from 147.75.109.163 port 58412 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:41:30.591677 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:41:30.600846 systemd-logind[1901]: New session 6 of user core.
Mar 25 01:41:30.613653 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 25 01:41:30.713072 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 25 01:41:30.713491 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:41:30.717564 sudo[2229]: pam_unix(sudo:session): session closed for user root
Mar 25 01:41:30.723458 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 25 01:41:30.723822 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:41:30.734612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:41:30.782810 augenrules[2251]: No rules
Mar 25 01:41:30.784249 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:41:30.784532 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:41:30.785618 sudo[2228]: pam_unix(sudo:session): session closed for user root
Mar 25 01:41:30.808602 sshd[2227]: Connection closed by 147.75.109.163 port 58412
Mar 25 01:41:30.809357 sshd-session[2225]: pam_unix(sshd:session): session closed for user core
Mar 25 01:41:30.812983 systemd[1]: sshd@5-172.31.30.255:22-147.75.109.163:58412.service: Deactivated successfully.
Mar 25 01:41:30.815720 systemd[1]: session-6.scope: Deactivated successfully.
Mar 25 01:41:30.818129 systemd-logind[1901]: Session 6 logged out. Waiting for processes to exit.
Mar 25 01:41:30.820096 systemd-logind[1901]: Removed session 6.
Mar 25 01:41:30.841047 systemd[1]: Started sshd@6-172.31.30.255:22-147.75.109.163:58420.service - OpenSSH per-connection server daemon (147.75.109.163:58420).
Mar 25 01:41:31.010937 sshd[2260]: Accepted publickey for core from 147.75.109.163 port 58420 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:41:31.012361 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:41:31.018012 systemd-logind[1901]: New session 7 of user core.
Mar 25 01:41:31.024599 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 25 01:41:31.123658 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 25 01:41:31.124037 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:41:31.754799 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 25 01:41:31.766904 (dockerd)[2281]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 25 01:41:32.307368 dockerd[2281]: time="2025-03-25T01:41:32.307310616Z" level=info msg="Starting up"
Mar 25 01:41:32.309083 dockerd[2281]: time="2025-03-25T01:41:32.309033951Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 25 01:41:32.342069 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3450684019-merged.mount: Deactivated successfully.
Mar 25 01:41:32.437053 dockerd[2281]: time="2025-03-25T01:41:32.437003301Z" level=info msg="Loading containers: start."
Mar 25 01:41:34.219755 systemd-resolved[1703]: Clock change detected. Flushing caches.
Mar 25 01:41:34.230526 kernel: Initializing XFRM netlink socket
Mar 25 01:41:34.232377 (udev-worker)[2304]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:41:34.323011 systemd-networkd[1761]: docker0: Link UP
Mar 25 01:41:34.383093 dockerd[2281]: time="2025-03-25T01:41:34.383051084Z" level=info msg="Loading containers: done."
Mar 25 01:41:34.399836 dockerd[2281]: time="2025-03-25T01:41:34.399789807Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 25 01:41:34.400015 dockerd[2281]: time="2025-03-25T01:41:34.399891797Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 25 01:41:34.400064 dockerd[2281]: time="2025-03-25T01:41:34.400018501Z" level=info msg="Daemon has completed initialization"
Mar 25 01:41:34.436569 dockerd[2281]: time="2025-03-25T01:41:34.436424683Z" level=info msg="API listen on /run/docker.sock"
Mar 25 01:41:34.436892 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 25 01:41:35.743826 containerd[1933]: time="2025-03-25T01:41:35.743698368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 25 01:41:36.366078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328630173.mount: Deactivated successfully.
Mar 25 01:41:38.953916 containerd[1933]: time="2025-03-25T01:41:38.953844229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:38.955111 containerd[1933]: time="2025-03-25T01:41:38.955061800Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268"
Mar 25 01:41:38.956073 containerd[1933]: time="2025-03-25T01:41:38.956012173Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:38.958376 containerd[1933]: time="2025-03-25T01:41:38.958324104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:38.959601 containerd[1933]: time="2025-03-25T01:41:38.959399427Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 3.215565672s"
Mar 25 01:41:38.959601 containerd[1933]: time="2025-03-25T01:41:38.959441312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\""
Mar 25 01:41:38.961547 containerd[1933]: time="2025-03-25T01:41:38.961518235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 25 01:41:41.507325 containerd[1933]: time="2025-03-25T01:41:41.507277903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:41.508365 containerd[1933]: time="2025-03-25T01:41:41.508301189Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776"
Mar 25 01:41:41.509453 containerd[1933]: time="2025-03-25T01:41:41.509401909Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:41.512576 containerd[1933]: time="2025-03-25T01:41:41.512170066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:41.513243 containerd[1933]: time="2025-03-25T01:41:41.513203807Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.551649295s"
Mar 25 01:41:41.513313 containerd[1933]: time="2025-03-25T01:41:41.513250827Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 25 01:41:41.513881 containerd[1933]: time="2025-03-25T01:41:41.513835387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 25 01:41:41.551545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:41:41.573962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:41:41.796329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:41:41.809077 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:41:41.855828 kubelet[2544]: E0325 01:41:41.855775 2544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:41:41.860697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:41:41.861002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:41:41.861706 systemd[1]: kubelet.service: Consumed 163ms CPU time, 97.3M memory peak.
Mar 25 01:41:43.538543 containerd[1933]: time="2025-03-25T01:41:43.538476157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:43.543920 containerd[1933]: time="2025-03-25T01:41:43.543855506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368"
Mar 25 01:41:43.548209 containerd[1933]: time="2025-03-25T01:41:43.548159263Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:43.557173 containerd[1933]: time="2025-03-25T01:41:43.557086726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:43.559102 containerd[1933]: time="2025-03-25T01:41:43.558199619Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 2.044325287s"
Mar 25 01:41:43.559102 containerd[1933]: time="2025-03-25T01:41:43.558242310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 25 01:41:43.562819 containerd[1933]: time="2025-03-25T01:41:43.559334365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 25 01:41:44.645811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394982614.mount: Deactivated successfully.
Mar 25 01:41:45.260752 containerd[1933]: time="2025-03-25T01:41:45.260696150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:45.261949 containerd[1933]: time="2025-03-25T01:41:45.261880339Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630"
Mar 25 01:41:45.262972 containerd[1933]: time="2025-03-25T01:41:45.262917526Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:45.264992 containerd[1933]: time="2025-03-25T01:41:45.264937940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:45.265713 containerd[1933]: time="2025-03-25T01:41:45.265566953Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 1.703478s"
Mar 25 01:41:45.265713 containerd[1933]: time="2025-03-25T01:41:45.265609771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 25 01:41:45.266446 containerd[1933]: time="2025-03-25T01:41:45.266254105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 25 01:41:45.842467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119293429.mount: Deactivated successfully.
Mar 25 01:41:47.002342 containerd[1933]: time="2025-03-25T01:41:47.002288765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:47.003822 containerd[1933]: time="2025-03-25T01:41:47.003763626Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 25 01:41:47.004897 containerd[1933]: time="2025-03-25T01:41:47.004861465Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:47.007662 containerd[1933]: time="2025-03-25T01:41:47.007605168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:47.009304 containerd[1933]: time="2025-03-25T01:41:47.008966948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.742674381s"
Mar 25 01:41:47.009304 containerd[1933]: time="2025-03-25T01:41:47.009014352Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 25 01:41:47.010441 containerd[1933]: time="2025-03-25T01:41:47.010414055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 25 01:41:47.513330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766295629.mount: Deactivated successfully.
Mar 25 01:41:47.530074 containerd[1933]: time="2025-03-25T01:41:47.529917648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:41:47.531887 containerd[1933]: time="2025-03-25T01:41:47.531789744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 25 01:41:47.534181 containerd[1933]: time="2025-03-25T01:41:47.534105058Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:41:47.539662 containerd[1933]: time="2025-03-25T01:41:47.538486442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:41:47.539662 containerd[1933]: time="2025-03-25T01:41:47.539365288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 528.913561ms"
Mar 25 01:41:47.539662 containerd[1933]: time="2025-03-25T01:41:47.539403892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 25 01:41:47.540422 containerd[1933]: time="2025-03-25T01:41:47.540290055Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 25 01:41:48.100851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58585415.mount: Deactivated successfully.
Mar 25 01:41:51.312831 containerd[1933]: time="2025-03-25T01:41:51.312755640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:51.314959 containerd[1933]: time="2025-03-25T01:41:51.314880748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Mar 25 01:41:51.317667 containerd[1933]: time="2025-03-25T01:41:51.317604000Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:51.323074 containerd[1933]: time="2025-03-25T01:41:51.321656994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:41:51.323074 containerd[1933]: time="2025-03-25T01:41:51.322889487Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.782563774s"
Mar 25 01:41:51.323074 containerd[1933]: time="2025-03-25T01:41:51.322928555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Mar 25 01:41:51.951651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 25 01:41:51.953679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:41:52.256434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:41:52.275241 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:41:52.366248 kubelet[2691]: E0325 01:41:52.366200 2691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:41:52.370806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:41:52.371348 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:41:52.372722 systemd[1]: kubelet.service: Consumed 209ms CPU time, 97M memory peak.
Mar 25 01:41:54.531072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:41:54.531315 systemd[1]: kubelet.service: Consumed 209ms CPU time, 97M memory peak.
Mar 25 01:41:54.534486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:41:54.579779 systemd[1]: Reload requested from client PID 2707 ('systemctl') (unit session-7.scope)...
Mar 25 01:41:54.579794 systemd[1]: Reloading...
Mar 25 01:41:54.747798 zram_generator::config[2750]: No configuration found.
Mar 25 01:41:54.985066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:41:55.162209 systemd[1]: Reloading finished in 581 ms.
Mar 25 01:41:55.245151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:41:55.250991 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 01:41:55.251260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:41:55.251331 systemd[1]: kubelet.service: Consumed 137ms CPU time, 83.6M memory peak.
Mar 25 01:41:55.253247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:41:55.460153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:41:55.474000 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:41:55.527923 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:41:55.527923 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:41:55.527923 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:41:55.529723 kubelet[2818]: I0325 01:41:55.529671 2818 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:41:56.125481 kubelet[2818]: I0325 01:41:56.125436 2818 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 25 01:41:56.125481 kubelet[2818]: I0325 01:41:56.125468 2818 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:41:56.125910 kubelet[2818]: I0325 01:41:56.125886 2818 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 25 01:41:56.167487 kubelet[2818]: I0325 01:41:56.167435 2818 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:41:56.170379 kubelet[2818]: E0325 01:41:56.170109 2818 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:41:56.188340 kubelet[2818]: I0325 01:41:56.188301 2818 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 01:41:56.194788 kubelet[2818]: I0325 01:41:56.194734 2818 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:41:56.199818 kubelet[2818]: I0325 01:41:56.199781 2818 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 01:41:56.200048 kubelet[2818]: I0325 01:41:56.200006 2818 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:41:56.200306 kubelet[2818]: I0325 01:41:56.200042 2818 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-255","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 01:41:56.200467 kubelet[2818]: I0325 01:41:56.200323 2818 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:41:56.200467 kubelet[2818]: I0325 01:41:56.200338 2818 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 01:41:56.200572 kubelet[2818]: I0325 01:41:56.200490 2818 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:41:56.203711 kubelet[2818]: I0325 01:41:56.203416 2818 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 01:41:56.203711 kubelet[2818]: I0325 01:41:56.203450 2818 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:41:56.203711 kubelet[2818]: I0325 01:41:56.203491 2818 kubelet.go:314] "Adding apiserver pod source"
Mar 25 01:41:56.203711 kubelet[2818]: I0325 01:41:56.203527 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:41:56.207248 kubelet[2818]: W0325 01:41:56.207189 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-255&limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused
Mar 25 01:41:56.207360 kubelet[2818]: E0325 01:41:56.207255 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-255&limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:41:56.212019 kubelet[2818]: W0325 01:41:56.211905 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused
Mar 25 01:41:56.212550 kubelet[2818]: E0325 01:41:56.212088 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:41:56.212550 kubelet[2818]: I0325 01:41:56.212418 2818 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:41:56.218026 kubelet[2818]: I0325 01:41:56.217716 2818 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:41:56.220032 kubelet[2818]: W0325 01:41:56.219988 2818 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 25 01:41:56.222263 kubelet[2818]: I0325 01:41:56.222228 2818 server.go:1269] "Started kubelet"
Mar 25 01:41:56.224250 kubelet[2818]: I0325 01:41:56.223470 2818 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:41:56.226001 kubelet[2818]: I0325 01:41:56.224881 2818 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 01:41:56.230530 kubelet[2818]: I0325 01:41:56.229946 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:41:56.230530 kubelet[2818]: I0325 01:41:56.230308 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:41:56.230872 kubelet[2818]: I0325 01:41:56.230843 2818 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:41:56.240418 kubelet[2818]: E0325 01:41:56.232360 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.255:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-255.182fe839e8347b3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-255,UID:ip-172-31-30-255,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-255,},FirstTimestamp:2025-03-25 01:41:56.222204733 +0000 UTC m=+0.742093799,LastTimestamp:2025-03-25 01:41:56.222204733 +0000 UTC m=+0.742093799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-255,}"
Mar 25 01:41:56.242408 kubelet[2818]: I0325 01:41:56.240961 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 01:41:56.242408 kubelet[2818]: E0325 01:41:56.241839 2818 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-255\" not found"
Mar 25 01:41:56.253869 kubelet[2818]: I0325 01:41:56.253834 2818 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 01:41:56.258076 kubelet[2818]: I0325 01:41:56.256479 2818 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 01:41:56.258076 kubelet[2818]: I0325 01:41:56.256647 2818 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:41:56.258076 kubelet[2818]: E0325 01:41:56.257253 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": dial tcp 172.31.30.255:6443: connect: connection refused" interval="200ms"
Mar 25 01:41:56.258076 kubelet[2818]: W0325 01:41:56.257335 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused
Mar 25 01:41:56.258076 kubelet[2818]: E0325 01:41:56.257395 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:41:56.259356 kubelet[2818]: I0325 01:41:56.258958 2818 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:41:56.259356 kubelet[2818]: I0325 01:41:56.259111 2818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:41:56.260547 kubelet[2818]: E0325 01:41:56.260421 2818 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:41:56.264634 kubelet[2818]: I0325 01:41:56.261895 2818 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:41:56.265247 kubelet[2818]: I0325 01:41:56.265136 2818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:41:56.271601 kubelet[2818]: I0325 01:41:56.269297 2818 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Mar 25 01:41:56.271601 kubelet[2818]: I0325 01:41:56.269332 2818 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:41:56.271601 kubelet[2818]: I0325 01:41:56.269380 2818 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 01:41:56.271601 kubelet[2818]: E0325 01:41:56.269451 2818 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:41:56.292372 kubelet[2818]: W0325 01:41:56.292263 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused Mar 25 01:41:56.292766 kubelet[2818]: E0325 01:41:56.292386 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:41:56.305811 kubelet[2818]: I0325 01:41:56.305756 2818 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:41:56.305811 kubelet[2818]: I0325 01:41:56.305778 2818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:41:56.305811 kubelet[2818]: I0325 01:41:56.305799 2818 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:41:56.309525 kubelet[2818]: I0325 01:41:56.309477 2818 policy_none.go:49] "None policy: Start" Mar 25 01:41:56.310334 kubelet[2818]: I0325 01:41:56.310237 2818 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:41:56.310334 kubelet[2818]: I0325 01:41:56.310316 2818 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:41:56.322039 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Mar 25 01:41:56.332605 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:41:56.336716 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 25 01:41:56.342434 kubelet[2818]: E0325 01:41:56.342390 2818 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-255\" not found" Mar 25 01:41:56.344484 kubelet[2818]: I0325 01:41:56.344434 2818 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:41:56.344710 kubelet[2818]: I0325 01:41:56.344689 2818 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:41:56.344795 kubelet[2818]: I0325 01:41:56.344708 2818 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:41:56.345528 kubelet[2818]: I0325 01:41:56.345496 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:41:56.349251 kubelet[2818]: E0325 01:41:56.349221 2818 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-255\" not found" Mar 25 01:41:56.386918 systemd[1]: Created slice kubepods-burstable-pod432adae8258aaa9360f8e1886310f113.slice - libcontainer container kubepods-burstable-pod432adae8258aaa9360f8e1886310f113.slice. Mar 25 01:41:56.407683 systemd[1]: Created slice kubepods-burstable-pod1802f1e00ea5890188e1954bbfff66db.slice - libcontainer container kubepods-burstable-pod1802f1e00ea5890188e1954bbfff66db.slice. Mar 25 01:41:56.420145 systemd[1]: Created slice kubepods-burstable-pod209fea63824ebbb39dd50d712bd7e4fe.slice - libcontainer container kubepods-burstable-pod209fea63824ebbb39dd50d712bd7e4fe.slice. 
Mar 25 01:41:56.447142 kubelet[2818]: I0325 01:41:56.447096 2818 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255" Mar 25 01:41:56.447541 kubelet[2818]: E0325 01:41:56.447514 2818 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.255:6443/api/v1/nodes\": dial tcp 172.31.30.255:6443: connect: connection refused" node="ip-172-31-30-255" Mar 25 01:41:56.457155 kubelet[2818]: I0325 01:41:56.457117 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-ca-certs\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " pod="kube-system/kube-apiserver-ip-172-31-30-255" Mar 25 01:41:56.457155 kubelet[2818]: I0325 01:41:56.457157 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255" Mar 25 01:41:56.457155 kubelet[2818]: I0325 01:41:56.457183 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255" Mar 25 01:41:56.457155 kubelet[2818]: I0325 01:41:56.457206 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " 
pod="kube-system/kube-apiserver-ip-172-31-30-255" Mar 25 01:41:56.457610 kubelet[2818]: I0325 01:41:56.457239 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " pod="kube-system/kube-apiserver-ip-172-31-30-255" Mar 25 01:41:56.457610 kubelet[2818]: I0325 01:41:56.457259 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255" Mar 25 01:41:56.457610 kubelet[2818]: I0325 01:41:56.457280 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255" Mar 25 01:41:56.457610 kubelet[2818]: I0325 01:41:56.457303 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255" Mar 25 01:41:56.457610 kubelet[2818]: I0325 01:41:56.457325 2818 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/209fea63824ebbb39dd50d712bd7e4fe-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-255\" (UID: \"209fea63824ebbb39dd50d712bd7e4fe\") " pod="kube-system/kube-scheduler-ip-172-31-30-255" Mar 25 01:41:56.464319 kubelet[2818]: E0325 01:41:56.458183 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": dial tcp 172.31.30.255:6443: connect: connection refused" interval="400ms" Mar 25 01:41:56.482973 kubelet[2818]: E0325 01:41:56.482866 2818 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.255:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-255.182fe839e8347b3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-255,UID:ip-172-31-30-255,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-255,},FirstTimestamp:2025-03-25 01:41:56.222204733 +0000 UTC m=+0.742093799,LastTimestamp:2025-03-25 01:41:56.222204733 +0000 UTC m=+0.742093799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-255,}" Mar 25 01:41:56.649735 kubelet[2818]: I0325 01:41:56.649624 2818 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255" Mar 25 01:41:56.650319 kubelet[2818]: E0325 01:41:56.650061 2818 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.255:6443/api/v1/nodes\": dial tcp 172.31.30.255:6443: connect: connection refused" node="ip-172-31-30-255" Mar 25 01:41:56.705793 containerd[1933]: time="2025-03-25T01:41:56.705479149Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-255,Uid:432adae8258aaa9360f8e1886310f113,Namespace:kube-system,Attempt:0,}" Mar 25 01:41:56.717554 containerd[1933]: time="2025-03-25T01:41:56.717493599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-255,Uid:1802f1e00ea5890188e1954bbfff66db,Namespace:kube-system,Attempt:0,}" Mar 25 01:41:56.723636 containerd[1933]: time="2025-03-25T01:41:56.723595615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-255,Uid:209fea63824ebbb39dd50d712bd7e4fe,Namespace:kube-system,Attempt:0,}" Mar 25 01:41:56.865254 kubelet[2818]: E0325 01:41:56.865122 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": dial tcp 172.31.30.255:6443: connect: connection refused" interval="800ms" Mar 25 01:41:56.882155 containerd[1933]: time="2025-03-25T01:41:56.881816850Z" level=info msg="connecting to shim 9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6" address="unix:///run/containerd/s/d376b2e139f35016ddd67060dba4a066959efddfad23934e78a5543ca2960e3c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:41:56.889833 containerd[1933]: time="2025-03-25T01:41:56.889479094Z" level=info msg="connecting to shim 64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f" address="unix:///run/containerd/s/0c754d2d3dcb59a6dbe944da2e5b079f9a7160596683ed45876bb917a2237788" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:41:56.907762 containerd[1933]: time="2025-03-25T01:41:56.907580499Z" level=info msg="connecting to shim 12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e" address="unix:///run/containerd/s/624b0b021e2008929c5fcf0819482df98fd74f11cf40cb88e92a263a76676e56" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:41:57.033703 systemd[1]: Started 
cri-containerd-64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f.scope - libcontainer container 64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f. Mar 25 01:41:57.042394 systemd[1]: Started cri-containerd-12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e.scope - libcontainer container 12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e. Mar 25 01:41:57.046855 systemd[1]: Started cri-containerd-9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6.scope - libcontainer container 9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6. Mar 25 01:41:57.057105 kubelet[2818]: W0325 01:41:57.057027 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused Mar 25 01:41:57.057261 kubelet[2818]: E0325 01:41:57.057117 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.255:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:41:57.061008 kubelet[2818]: I0325 01:41:57.060972 2818 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255" Mar 25 01:41:57.061374 kubelet[2818]: E0325 01:41:57.061339 2818 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.255:6443/api/v1/nodes\": dial tcp 172.31.30.255:6443: connect: connection refused" node="ip-172-31-30-255" Mar 25 01:41:57.167864 containerd[1933]: time="2025-03-25T01:41:57.167155138Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-255,Uid:209fea63824ebbb39dd50d712bd7e4fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e\"" Mar 25 01:41:57.175015 containerd[1933]: time="2025-03-25T01:41:57.174772118Z" level=info msg="CreateContainer within sandbox \"12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:41:57.176646 containerd[1933]: time="2025-03-25T01:41:57.176611482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-255,Uid:432adae8258aaa9360f8e1886310f113,Namespace:kube-system,Attempt:0,} returns sandbox id \"64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f\"" Mar 25 01:41:57.184019 containerd[1933]: time="2025-03-25T01:41:57.183965225Z" level=info msg="CreateContainer within sandbox \"64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:41:57.189260 containerd[1933]: time="2025-03-25T01:41:57.189215240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-255,Uid:1802f1e00ea5890188e1954bbfff66db,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6\"" Mar 25 01:41:57.192611 containerd[1933]: time="2025-03-25T01:41:57.192571397Z" level=info msg="CreateContainer within sandbox \"9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:41:57.225945 containerd[1933]: time="2025-03-25T01:41:57.225887953Z" level=info msg="Container efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:41:57.226393 containerd[1933]: time="2025-03-25T01:41:57.225886020Z" level=info msg="Container 
4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:41:57.229563 containerd[1933]: time="2025-03-25T01:41:57.229445111Z" level=info msg="Container 1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:41:57.251601 containerd[1933]: time="2025-03-25T01:41:57.251560329Z" level=info msg="CreateContainer within sandbox \"9b9a440a1a2f194ad334bba48e4c8e0d0feddfdc9881760e4d5e9f6f7c75a1b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\"" Mar 25 01:41:57.253330 containerd[1933]: time="2025-03-25T01:41:57.253148809Z" level=info msg="StartContainer for \"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\"" Mar 25 01:41:57.255612 containerd[1933]: time="2025-03-25T01:41:57.255574879Z" level=info msg="connecting to shim 1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129" address="unix:///run/containerd/s/d376b2e139f35016ddd67060dba4a066959efddfad23934e78a5543ca2960e3c" protocol=ttrpc version=3 Mar 25 01:41:57.259117 containerd[1933]: time="2025-03-25T01:41:57.258948131Z" level=info msg="CreateContainer within sandbox \"64ccd6b7d64fd7c735e282ac7c8544156afe6091c88bcd998755f5320cef9c7f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6\"" Mar 25 01:41:57.260595 containerd[1933]: time="2025-03-25T01:41:57.260528431Z" level=info msg="StartContainer for \"efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6\"" Mar 25 01:41:57.261339 containerd[1933]: time="2025-03-25T01:41:57.261283692Z" level=info msg="CreateContainer within sandbox \"12761700c66664a5dc246605c2801bb2b901d95e6b764789dfd9e86f46af761e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\"" Mar 25 01:41:57.262752 containerd[1933]: time="2025-03-25T01:41:57.262698299Z" level=info msg="StartContainer for \"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\"" Mar 25 01:41:57.263251 containerd[1933]: time="2025-03-25T01:41:57.263216916Z" level=info msg="connecting to shim efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6" address="unix:///run/containerd/s/0c754d2d3dcb59a6dbe944da2e5b079f9a7160596683ed45876bb917a2237788" protocol=ttrpc version=3 Mar 25 01:41:57.268746 containerd[1933]: time="2025-03-25T01:41:57.268705880Z" level=info msg="connecting to shim 4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff" address="unix:///run/containerd/s/624b0b021e2008929c5fcf0819482df98fd74f11cf40cb88e92a263a76676e56" protocol=ttrpc version=3 Mar 25 01:41:57.309768 systemd[1]: Started cri-containerd-1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129.scope - libcontainer container 1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129. Mar 25 01:41:57.312020 systemd[1]: Started cri-containerd-4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff.scope - libcontainer container 4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff. Mar 25 01:41:57.315645 systemd[1]: Started cri-containerd-efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6.scope - libcontainer container efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6. 
Mar 25 01:41:57.504874 containerd[1933]: time="2025-03-25T01:41:57.504752997Z" level=info msg="StartContainer for \"efa86f22c643d4ea0a77a6734504ada007b4e15bcb28cf00fcdac91519a713c6\" returns successfully" Mar 25 01:41:57.512978 containerd[1933]: time="2025-03-25T01:41:57.512939392Z" level=info msg="StartContainer for \"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\" returns successfully" Mar 25 01:41:57.520053 kubelet[2818]: W0325 01:41:57.520006 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused Mar 25 01:41:57.523688 kubelet[2818]: E0325 01:41:57.523602 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:41:57.530376 containerd[1933]: time="2025-03-25T01:41:57.530257057Z" level=info msg="StartContainer for \"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\" returns successfully" Mar 25 01:41:57.541170 kubelet[2818]: W0325 01:41:57.541045 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused Mar 25 01:41:57.541170 kubelet[2818]: E0325 01:41:57.541127 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" 
logger="UnhandledError" Mar 25 01:41:57.641787 kubelet[2818]: W0325 01:41:57.641669 2818 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-255&limit=500&resourceVersion=0": dial tcp 172.31.30.255:6443: connect: connection refused Mar 25 01:41:57.641787 kubelet[2818]: E0325 01:41:57.641756 2818 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-255&limit=500&resourceVersion=0\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:41:57.666756 kubelet[2818]: E0325 01:41:57.666679 2818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": dial tcp 172.31.30.255:6443: connect: connection refused" interval="1.6s" Mar 25 01:41:57.877111 kubelet[2818]: I0325 01:41:57.876750 2818 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255" Mar 25 01:41:57.877425 kubelet[2818]: E0325 01:41:57.877399 2818 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.255:6443/api/v1/nodes\": dial tcp 172.31.30.255:6443: connect: connection refused" node="ip-172-31-30-255" Mar 25 01:41:58.186290 kubelet[2818]: E0325 01:41:58.185169 2818 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.255:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:41:58.246827 systemd[1]: 
systemd-hostnamed.service: Deactivated successfully. Mar 25 01:41:59.480131 kubelet[2818]: I0325 01:41:59.480095 2818 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255" Mar 25 01:42:00.763952 kubelet[2818]: E0325 01:42:00.763897 2818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-255\" not found" node="ip-172-31-30-255" Mar 25 01:42:00.782523 kubelet[2818]: I0325 01:42:00.782372 2818 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-255" Mar 25 01:42:00.782523 kubelet[2818]: E0325 01:42:00.782414 2818 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-255\": node \"ip-172-31-30-255\" not found" Mar 25 01:42:01.208829 kubelet[2818]: I0325 01:42:01.208662 2818 apiserver.go:52] "Watching apiserver" Mar 25 01:42:01.257161 kubelet[2818]: I0325 01:42:01.257113 2818 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 01:42:03.445770 systemd[1]: Reload requested from client PID 3091 ('systemctl') (unit session-7.scope)... Mar 25 01:42:03.445789 systemd[1]: Reloading... Mar 25 01:42:03.582590 zram_generator::config[3136]: No configuration found. Mar 25 01:42:03.732291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:42:03.881684 systemd[1]: Reloading finished in 435 ms. Mar 25 01:42:03.913813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:42:03.928291 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:42:03.928609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:42:03.928694 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 114.1M memory peak. 
Mar 25 01:42:03.931803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:42:04.237738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:42:04.252153 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:42:04.378258 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:42:04.378258 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:42:04.378258 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:42:04.378744 kubelet[3195]: I0325 01:42:04.378385 3195 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:42:04.389625 kubelet[3195]: I0325 01:42:04.389584 3195 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 01:42:04.389625 kubelet[3195]: I0325 01:42:04.389622 3195 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:42:04.390829 kubelet[3195]: I0325 01:42:04.389967 3195 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 01:42:04.394845 kubelet[3195]: I0325 01:42:04.394048 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 25 01:42:04.412174 kubelet[3195]: I0325 01:42:04.411899 3195 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:42:04.421586 kubelet[3195]: I0325 01:42:04.421559 3195 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 01:42:04.426342 kubelet[3195]: I0325 01:42:04.426114 3195 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:42:04.426342 kubelet[3195]: I0325 01:42:04.426299 3195 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 01:42:04.427262 kubelet[3195]: I0325 01:42:04.426788 3195 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:42:04.427262 kubelet[3195]: I0325 01:42:04.426827 3195 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-255","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 01:42:04.427262 kubelet[3195]: I0325 01:42:04.427084 3195 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:42:04.427262 kubelet[3195]: I0325 01:42:04.427103 3195 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 01:42:04.427584 kubelet[3195]: I0325 01:42:04.427145 3195 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:42:04.427695 kubelet[3195]: I0325 01:42:04.427681 3195 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 01:42:04.428052 kubelet[3195]: I0325 01:42:04.428002 3195 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:42:04.428285 kubelet[3195]: I0325 01:42:04.428125 3195 kubelet.go:314] "Adding apiserver pod source"
Mar 25 01:42:04.428285 kubelet[3195]: I0325 01:42:04.428144 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:42:04.441286 kubelet[3195]: I0325 01:42:04.440254 3195 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:42:04.442525 kubelet[3195]: I0325 01:42:04.442051 3195 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:42:04.442946 kubelet[3195]: I0325 01:42:04.442927 3195 server.go:1269] "Started kubelet"
Mar 25 01:42:04.455811 kubelet[3195]: I0325 01:42:04.455210 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:42:04.468536 kubelet[3195]: I0325 01:42:04.466568 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 01:42:04.468536 kubelet[3195]: I0325 01:42:04.466776 3195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:42:04.468536 kubelet[3195]: I0325 01:42:04.468356 3195 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 01:42:04.474817 kubelet[3195]: I0325 01:42:04.474733 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:42:04.475055 kubelet[3195]: I0325 01:42:04.475010 3195 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:42:04.490300 kubelet[3195]: I0325 01:42:04.489142 3195 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 01:42:04.490300 kubelet[3195]: E0325 01:42:04.489429 3195 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-255\" not found"
Mar 25 01:42:04.491932 kubelet[3195]: I0325 01:42:04.491844 3195 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 01:42:04.492042 kubelet[3195]: I0325 01:42:04.492031 3195 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:42:04.504581 kubelet[3195]: I0325 01:42:04.502367 3195 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:42:04.515569 kubelet[3195]: I0325 01:42:04.514414 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:42:04.525909 kubelet[3195]: E0325 01:42:04.524783 3195 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:42:04.562544 kubelet[3195]: I0325 01:42:04.559961 3195 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:42:04.572628 kubelet[3195]: I0325 01:42:04.571812 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:42:04.578858 kubelet[3195]: I0325 01:42:04.578012 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:42:04.578858 kubelet[3195]: I0325 01:42:04.578054 3195 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 01:42:04.578858 kubelet[3195]: I0325 01:42:04.578076 3195 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 25 01:42:04.578858 kubelet[3195]: E0325 01:42:04.578125 3195 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:42:04.654448 kubelet[3195]: I0325 01:42:04.654423 3195 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 01:42:04.654804 kubelet[3195]: I0325 01:42:04.654648 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 01:42:04.654804 kubelet[3195]: I0325 01:42:04.654707 3195 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:42:04.655352 kubelet[3195]: I0325 01:42:04.655220 3195 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 01:42:04.655352 kubelet[3195]: I0325 01:42:04.655239 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 01:42:04.655352 kubelet[3195]: I0325 01:42:04.655273 3195 policy_none.go:49] "None policy: Start"
Mar 25 01:42:04.657104 kubelet[3195]: I0325 01:42:04.656734 3195 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 01:42:04.657104 kubelet[3195]: I0325 01:42:04.656758 3195 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:42:04.657104 kubelet[3195]: I0325 01:42:04.656955 3195 state_mem.go:75] "Updated machine memory state"
Mar 25 01:42:04.665652 kubelet[3195]: I0325 01:42:04.665450 3195 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:42:04.667754 kubelet[3195]: I0325 01:42:04.665714 3195 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 01:42:04.667754 kubelet[3195]: I0325 01:42:04.665727 3195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:42:04.667754 kubelet[3195]: I0325 01:42:04.666375 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:42:04.703097 kubelet[3195]: E0325 01:42:04.701285 3195 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-255\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-255"
Mar 25 01:42:04.786037 kubelet[3195]: I0325 01:42:04.785916 3195 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-255"
Mar 25 01:42:04.793345 kubelet[3195]: I0325 01:42:04.792862 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-ca-certs\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " pod="kube-system/kube-apiserver-ip-172-31-30-255"
Mar 25 01:42:04.793345 kubelet[3195]: I0325 01:42:04.792912 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " pod="kube-system/kube-apiserver-ip-172-31-30-255"
Mar 25 01:42:04.793345 kubelet[3195]: I0325 01:42:04.792949 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255"
Mar 25 01:42:04.793345 kubelet[3195]: I0325 01:42:04.792976 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/209fea63824ebbb39dd50d712bd7e4fe-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-255\" (UID: \"209fea63824ebbb39dd50d712bd7e4fe\") " pod="kube-system/kube-scheduler-ip-172-31-30-255"
Mar 25 01:42:04.793345 kubelet[3195]: I0325 01:42:04.793094 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/432adae8258aaa9360f8e1886310f113-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-255\" (UID: \"432adae8258aaa9360f8e1886310f113\") " pod="kube-system/kube-apiserver-ip-172-31-30-255"
Mar 25 01:42:04.794348 kubelet[3195]: I0325 01:42:04.793119 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255"
Mar 25 01:42:04.794348 kubelet[3195]: I0325 01:42:04.793144 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255"
Mar 25 01:42:04.794348 kubelet[3195]: I0325 01:42:04.793171 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255"
Mar 25 01:42:04.794348 kubelet[3195]: I0325 01:42:04.793197 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1802f1e00ea5890188e1954bbfff66db-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-255\" (UID: \"1802f1e00ea5890188e1954bbfff66db\") " pod="kube-system/kube-controller-manager-ip-172-31-30-255"
Mar 25 01:42:04.802364 kubelet[3195]: I0325 01:42:04.802316 3195 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-255"
Mar 25 01:42:04.802877 kubelet[3195]: I0325 01:42:04.802659 3195 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-255"
Mar 25 01:42:05.442412 kubelet[3195]: I0325 01:42:05.442349 3195 apiserver.go:52] "Watching apiserver"
Mar 25 01:42:05.493985 kubelet[3195]: I0325 01:42:05.493948 3195 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 25 01:42:05.660277 kubelet[3195]: E0325 01:42:05.659994 3195 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-255\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-255"
Mar 25 01:42:05.748758 kubelet[3195]: I0325 01:42:05.748042 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-255" podStartSLOduration=1.748016775 podStartE2EDuration="1.748016775s" podCreationTimestamp="2025-03-25 01:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:42:05.728549676 +0000 UTC m=+1.459849076" watchObservedRunningTime="2025-03-25 01:42:05.748016775 +0000 UTC m=+1.479316168"
Mar 25 01:42:05.791475 kubelet[3195]: I0325 01:42:05.791409 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-255" podStartSLOduration=2.791385799 podStartE2EDuration="2.791385799s" podCreationTimestamp="2025-03-25 01:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:42:05.763217239 +0000 UTC m=+1.494516637" watchObservedRunningTime="2025-03-25 01:42:05.791385799 +0000 UTC m=+1.522685197"
Mar 25 01:42:05.817530 kubelet[3195]: I0325 01:42:05.816170 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-255" podStartSLOduration=1.8161510760000001 podStartE2EDuration="1.816151076s" podCreationTimestamp="2025-03-25 01:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:42:05.792551468 +0000 UTC m=+1.523850866" watchObservedRunningTime="2025-03-25 01:42:05.816151076 +0000 UTC m=+1.547450470"
Mar 25 01:42:08.832852 kubelet[3195]: I0325 01:42:08.832812 3195 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 01:42:08.834062 containerd[1933]: time="2025-03-25T01:42:08.834021931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 01:42:08.834761 kubelet[3195]: I0325 01:42:08.834707 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 01:42:09.841041 kubelet[3195]: I0325 01:42:09.840998 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a668eed7-9bf0-4354-a22e-b4e7e1981393-kube-proxy\") pod \"kube-proxy-7r46n\" (UID: \"a668eed7-9bf0-4354-a22e-b4e7e1981393\") " pod="kube-system/kube-proxy-7r46n"
Mar 25 01:42:09.841542 kubelet[3195]: I0325 01:42:09.841049 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lmj\" (UniqueName: \"kubernetes.io/projected/a668eed7-9bf0-4354-a22e-b4e7e1981393-kube-api-access-z6lmj\") pod \"kube-proxy-7r46n\" (UID: \"a668eed7-9bf0-4354-a22e-b4e7e1981393\") " pod="kube-system/kube-proxy-7r46n"
Mar 25 01:42:09.841542 kubelet[3195]: I0325 01:42:09.841088 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a668eed7-9bf0-4354-a22e-b4e7e1981393-xtables-lock\") pod \"kube-proxy-7r46n\" (UID: \"a668eed7-9bf0-4354-a22e-b4e7e1981393\") " pod="kube-system/kube-proxy-7r46n"
Mar 25 01:42:09.841542 kubelet[3195]: I0325 01:42:09.841111 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a668eed7-9bf0-4354-a22e-b4e7e1981393-lib-modules\") pod \"kube-proxy-7r46n\" (UID: \"a668eed7-9bf0-4354-a22e-b4e7e1981393\") " pod="kube-system/kube-proxy-7r46n"
Mar 25 01:42:09.866870 systemd[1]: Created slice kubepods-besteffort-poda668eed7_9bf0_4354_a22e_b4e7e1981393.slice - libcontainer container kubepods-besteffort-poda668eed7_9bf0_4354_a22e_b4e7e1981393.slice.
Mar 25 01:42:09.954962 systemd[1]: Created slice kubepods-besteffort-pod3452c454_5ab6_4005_a6af_d9daa864be83.slice - libcontainer container kubepods-besteffort-pod3452c454_5ab6_4005_a6af_d9daa864be83.slice.
Mar 25 01:42:10.042708 kubelet[3195]: I0325 01:42:10.042649 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2pc\" (UniqueName: \"kubernetes.io/projected/3452c454-5ab6-4005-a6af-d9daa864be83-kube-api-access-vf2pc\") pod \"tigera-operator-64ff5465b7-x2k5r\" (UID: \"3452c454-5ab6-4005-a6af-d9daa864be83\") " pod="tigera-operator/tigera-operator-64ff5465b7-x2k5r"
Mar 25 01:42:10.042882 kubelet[3195]: I0325 01:42:10.042746 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3452c454-5ab6-4005-a6af-d9daa864be83-var-lib-calico\") pod \"tigera-operator-64ff5465b7-x2k5r\" (UID: \"3452c454-5ab6-4005-a6af-d9daa864be83\") " pod="tigera-operator/tigera-operator-64ff5465b7-x2k5r"
Mar 25 01:42:10.187577 containerd[1933]: time="2025-03-25T01:42:10.187335538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7r46n,Uid:a668eed7-9bf0-4354-a22e-b4e7e1981393,Namespace:kube-system,Attempt:0,}"
Mar 25 01:42:10.238268 containerd[1933]: time="2025-03-25T01:42:10.238212108Z" level=info msg="connecting to shim 0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92" address="unix:///run/containerd/s/b8dcd00ff527571ccbf8802832f6a39083a63fb8ef077e7a533fcfb9ae76d46d" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:42:10.264520 containerd[1933]: time="2025-03-25T01:42:10.264418141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-x2k5r,Uid:3452c454-5ab6-4005-a6af-d9daa864be83,Namespace:tigera-operator,Attempt:0,}"
Mar 25 01:42:10.278921 systemd[1]: Started cri-containerd-0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92.scope - libcontainer container 0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92.
Mar 25 01:42:10.335211 containerd[1933]: time="2025-03-25T01:42:10.335162767Z" level=info msg="connecting to shim a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b" address="unix:///run/containerd/s/028eb54d3c63585a7cf5748283c440bc2682382f533c693eec68a4a8d0c27129" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:42:10.339664 containerd[1933]: time="2025-03-25T01:42:10.339623015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7r46n,Uid:a668eed7-9bf0-4354-a22e-b4e7e1981393,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92\""
Mar 25 01:42:10.348811 containerd[1933]: time="2025-03-25T01:42:10.347089349Z" level=info msg="CreateContainer within sandbox \"0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 01:42:10.377785 systemd[1]: Started cri-containerd-a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b.scope - libcontainer container a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b.
Mar 25 01:42:10.378903 containerd[1933]: time="2025-03-25T01:42:10.378858370Z" level=info msg="Container fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:42:10.402573 containerd[1933]: time="2025-03-25T01:42:10.400748459Z" level=info msg="CreateContainer within sandbox \"0aa4a9cec622fc94b3e0708dac0c26ec4afdaf316ca20e65371ee71532925a92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b\""
Mar 25 01:42:10.406874 containerd[1933]: time="2025-03-25T01:42:10.406829424Z" level=info msg="StartContainer for \"fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b\""
Mar 25 01:42:10.415911 containerd[1933]: time="2025-03-25T01:42:10.415851815Z" level=info msg="connecting to shim fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b" address="unix:///run/containerd/s/b8dcd00ff527571ccbf8802832f6a39083a63fb8ef077e7a533fcfb9ae76d46d" protocol=ttrpc version=3
Mar 25 01:42:10.449047 systemd[1]: Started cri-containerd-fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b.scope - libcontainer container fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b.
Mar 25 01:42:10.461892 containerd[1933]: time="2025-03-25T01:42:10.461713697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-x2k5r,Uid:3452c454-5ab6-4005-a6af-d9daa864be83,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b\""
Mar 25 01:42:10.467320 containerd[1933]: time="2025-03-25T01:42:10.467247302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 25 01:42:10.509299 containerd[1933]: time="2025-03-25T01:42:10.509090899Z" level=info msg="StartContainer for \"fc686383ed544f0f48536e21f2a1a50c84793397737b3d8cb77703692da6be7b\" returns successfully"
Mar 25 01:42:11.170485 sudo[2263]: pam_unix(sudo:session): session closed for user root
Mar 25 01:42:11.193184 sshd[2262]: Connection closed by 147.75.109.163 port 58420
Mar 25 01:42:11.195769 sshd-session[2260]: pam_unix(sshd:session): session closed for user core
Mar 25 01:42:11.200180 systemd[1]: sshd@6-172.31.30.255:22-147.75.109.163:58420.service: Deactivated successfully.
Mar 25 01:42:11.202971 systemd[1]: session-7.scope: Deactivated successfully.
Mar 25 01:42:11.203223 systemd[1]: session-7.scope: Consumed 4.868s CPU time, 150.3M memory peak.
Mar 25 01:42:11.205019 systemd-logind[1901]: Session 7 logged out. Waiting for processes to exit.
Mar 25 01:42:11.208162 systemd-logind[1901]: Removed session 7.
Mar 25 01:42:12.385825 update_engine[1904]: I20250325 01:42:12.385732 1904 update_attempter.cc:509] Updating boot flags...
Mar 25 01:42:12.462036 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3537)
Mar 25 01:42:12.711574 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3539)
Mar 25 01:42:12.878846 kubelet[3195]: I0325 01:42:12.878755 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7r46n" podStartSLOduration=3.878529894 podStartE2EDuration="3.878529894s" podCreationTimestamp="2025-03-25 01:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:42:10.647414248 +0000 UTC m=+6.378713648" watchObservedRunningTime="2025-03-25 01:42:12.878529894 +0000 UTC m=+8.609829292"
Mar 25 01:42:13.802464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470593730.mount: Deactivated successfully.
Mar 25 01:42:14.889527 containerd[1933]: time="2025-03-25T01:42:14.889468034Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:42:14.890743 containerd[1933]: time="2025-03-25T01:42:14.890530383Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 25 01:42:14.893212 containerd[1933]: time="2025-03-25T01:42:14.892117130Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:42:14.896423 containerd[1933]: time="2025-03-25T01:42:14.895400925Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:42:14.896423 containerd[1933]: time="2025-03-25T01:42:14.896162665Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 4.428872935s"
Mar 25 01:42:14.896423 containerd[1933]: time="2025-03-25T01:42:14.896187947Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 25 01:42:14.904721 containerd[1933]: time="2025-03-25T01:42:14.904261000Z" level=info msg="CreateContainer within sandbox \"a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 25 01:42:14.916737 containerd[1933]: time="2025-03-25T01:42:14.916689844Z" level=info msg="Container 93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:42:14.930036 containerd[1933]: time="2025-03-25T01:42:14.929980813Z" level=info msg="CreateContainer within sandbox \"a04ab0d6d5f0aa1329a57f8efbaa50112aa39d0f255738affb31aa7da07b722b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\""
Mar 25 01:42:14.932315 containerd[1933]: time="2025-03-25T01:42:14.930797450Z" level=info msg="StartContainer for \"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\""
Mar 25 01:42:14.932315 containerd[1933]: time="2025-03-25T01:42:14.931953632Z" level=info msg="connecting to shim 93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006" address="unix:///run/containerd/s/028eb54d3c63585a7cf5748283c440bc2682382f533c693eec68a4a8d0c27129" protocol=ttrpc version=3
Mar 25 01:42:14.967727 systemd[1]: Started cri-containerd-93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006.scope - libcontainer container 93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006.
Mar 25 01:42:15.018049 containerd[1933]: time="2025-03-25T01:42:15.018001388Z" level=info msg="StartContainer for \"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\" returns successfully"
Mar 25 01:42:18.358027 kubelet[3195]: I0325 01:42:18.357850 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-x2k5r" podStartSLOduration=4.9127327229999995 podStartE2EDuration="9.351021405s" podCreationTimestamp="2025-03-25 01:42:09 +0000 UTC" firstStartedPulling="2025-03-25 01:42:10.464437544 +0000 UTC m=+6.195736935" lastFinishedPulling="2025-03-25 01:42:14.902726229 +0000 UTC m=+10.634025617" observedRunningTime="2025-03-25 01:42:15.671124306 +0000 UTC m=+11.402423704" watchObservedRunningTime="2025-03-25 01:42:18.351021405 +0000 UTC m=+14.082320839"
Mar 25 01:42:18.418551 systemd[1]: Created slice kubepods-besteffort-pod2251ad9b_7622_4c24_a011_53211da3e456.slice - libcontainer container kubepods-besteffort-pod2251ad9b_7622_4c24_a011_53211da3e456.slice.
Mar 25 01:42:18.431341 kubelet[3195]: I0325 01:42:18.430418 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2251ad9b-7622-4c24-a011-53211da3e456-tigera-ca-bundle\") pod \"calico-typha-8c56d65db-jpg2m\" (UID: \"2251ad9b-7622-4c24-a011-53211da3e456\") " pod="calico-system/calico-typha-8c56d65db-jpg2m"
Mar 25 01:42:18.452636 kubelet[3195]: I0325 01:42:18.430497 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2251ad9b-7622-4c24-a011-53211da3e456-typha-certs\") pod \"calico-typha-8c56d65db-jpg2m\" (UID: \"2251ad9b-7622-4c24-a011-53211da3e456\") " pod="calico-system/calico-typha-8c56d65db-jpg2m"
Mar 25 01:42:18.453158 kubelet[3195]: I0325 01:42:18.453050 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqg9q\" (UniqueName: \"kubernetes.io/projected/2251ad9b-7622-4c24-a011-53211da3e456-kube-api-access-pqg9q\") pod \"calico-typha-8c56d65db-jpg2m\" (UID: \"2251ad9b-7622-4c24-a011-53211da3e456\") " pod="calico-system/calico-typha-8c56d65db-jpg2m"
Mar 25 01:42:18.604586 systemd[1]: Created slice kubepods-besteffort-pod834f9a62_cf5c_4d59_a8a4_3b115698be00.slice - libcontainer container kubepods-besteffort-pod834f9a62_cf5c_4d59_a8a4_3b115698be00.slice.
Mar 25 01:42:18.655379 kubelet[3195]: I0325 01:42:18.654672 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-var-run-calico\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655379 kubelet[3195]: I0325 01:42:18.654721 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-xtables-lock\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655379 kubelet[3195]: I0325 01:42:18.654759 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-lib-modules\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655379 kubelet[3195]: I0325 01:42:18.654783 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/834f9a62-cf5c-4d59-a8a4-3b115698be00-tigera-ca-bundle\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655379 kubelet[3195]: I0325 01:42:18.654805 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-cni-net-dir\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655891 kubelet[3195]: I0325 01:42:18.654827 3195 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/834f9a62-cf5c-4d59-a8a4-3b115698be00-node-certs\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655891 kubelet[3195]: I0325 01:42:18.654852 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-var-lib-calico\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655891 kubelet[3195]: I0325 01:42:18.654874 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-cni-bin-dir\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655891 kubelet[3195]: I0325 01:42:18.654896 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-cni-log-dir\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.655891 kubelet[3195]: I0325 01:42:18.654919 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-flexvol-driver-host\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.656099 kubelet[3195]: I0325 01:42:18.654946 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/834f9a62-cf5c-4d59-a8a4-3b115698be00-policysync\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.656099 kubelet[3195]: I0325 01:42:18.654970 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9hm\" (UniqueName: \"kubernetes.io/projected/834f9a62-cf5c-4d59-a8a4-3b115698be00-kube-api-access-vp9hm\") pod \"calico-node-xq6r6\" (UID: \"834f9a62-cf5c-4d59-a8a4-3b115698be00\") " pod="calico-system/calico-node-xq6r6" Mar 25 01:42:18.748292 containerd[1933]: time="2025-03-25T01:42:18.748079401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c56d65db-jpg2m,Uid:2251ad9b-7622-4c24-a011-53211da3e456,Namespace:calico-system,Attempt:0,}" Mar 25 01:42:18.785654 kubelet[3195]: E0325 01:42:18.777175 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:42:18.785654 kubelet[3195]: W0325 01:42:18.777221 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:42:18.785654 kubelet[3195]: E0325 01:42:18.777250 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[The preceding three kubelet messages (driver-call.go:262, driver-call.go:149, plugins.go:691) repeat verbatim five more times between 01:42:18.793 and 01:42:18.844, interleaved with the entries below; repetitions omitted.]
Mar 25 01:42:18.839824 containerd[1933]: time="2025-03-25T01:42:18.839771338Z" level=info msg="connecting to shim f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040" address="unix:///run/containerd/s/2d20f492ec50bf4b3ec095b8c20cb5ef8be8792f23f87eb3e5fe80bbb0262220" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:42:18.900953 systemd[1]: Started cri-containerd-f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040.scope - libcontainer container f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040.
Mar 25 01:42:18.910462 containerd[1933]: time="2025-03-25T01:42:18.910046941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xq6r6,Uid:834f9a62-cf5c-4d59-a8a4-3b115698be00,Namespace:calico-system,Attempt:0,}"
Mar 25 01:42:18.967795 containerd[1933]: time="2025-03-25T01:42:18.967306545Z" level=info msg="connecting to shim e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca" address="unix:///run/containerd/s/d83a38d66ae388f23c1bed241a8bf1c99d3a5b4b0e034708a6d46e1ffa162c5e" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:42:19.007473 kubelet[3195]: E0325 01:42:19.006636 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7"
[The FlexVolume probe-failure triplet (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats verbatim twenty times between 01:42:19.009 and 01:42:19.034; repetitions omitted.]
Mar 25 01:42:19.061795 kubelet[3195]: E0325 01:42:19.061732 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 01:42:19.061795 kubelet[3195]: W0325 01:42:19.061789 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 01:42:19.062093 kubelet[3195]: E0325 01:42:19.061818 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 25 01:42:19.062093 kubelet[3195]: I0325 01:42:19.061854 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7-kubelet-dir\") pod \"csi-node-driver-gm72j\" (UID: \"bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7\") " pod="calico-system/csi-node-driver-gm72j"
Mar 25 01:42:19.063529 kubelet[3195]: I0325 01:42:19.062412 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7-socket-dir\") pod \"csi-node-driver-gm72j\" (UID: \"bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7\") " pod="calico-system/csi-node-driver-gm72j"
Mar 25 01:42:19.063529 kubelet[3195]: I0325 01:42:19.062817 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7-varrun\") pod \"csi-node-driver-gm72j\" (UID: \"bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7\") " pod="calico-system/csi-node-driver-gm72j"
Mar 25 01:42:19.064014 kubelet[3195]: I0325 01:42:19.063104 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tsh\" (UniqueName: \"kubernetes.io/projected/bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7-kube-api-access-n6tsh\") pod \"csi-node-driver-gm72j\" (UID: \"bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7\") " pod="calico-system/csi-node-driver-gm72j"
Mar 25 01:42:19.064014 kubelet[3195]: I0325 01:42:19.063493 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7-registration-dir\") pod \"csi-node-driver-gm72j\" (UID: \"bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7\") " pod="calico-system/csi-node-driver-gm72j"
[The FlexVolume probe-failure triplet (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats verbatim fourteen more times between 01:42:19.062 and 01:42:19.066, interleaved with the volume entries above; repetitions omitted.]
Mar 25 01:42:19.069087 systemd[1]: Started cri-containerd-e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca.scope - libcontainer container e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca.
Mar 25 01:42:19.165685 kubelet[3195]: E0325 01:42:19.165305 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:42:19.168051 kubelet[3195]: W0325 01:42:19.167780 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:42:19.168051 kubelet[3195]: E0325 01:42:19.167829 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:42:19.168498 kubelet[3195]: E0325 01:42:19.168459 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:42:19.168498 kubelet[3195]: W0325 01:42:19.168484 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:42:19.168676 kubelet[3195]: E0325 01:42:19.168571 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:42:19.169929 kubelet[3195]: E0325 01:42:19.169894 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:42:19.169929 kubelet[3195]: W0325 01:42:19.169911 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:42:19.170824 kubelet[3195]: E0325 01:42:19.169997 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:42:19.306894 containerd[1933]: time="2025-03-25T01:42:19.306842175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xq6r6,Uid:834f9a62-cf5c-4d59-a8a4-3b115698be00,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\"" Mar 25 01:42:19.358429 containerd[1933]: time="2025-03-25T01:42:19.357908264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 25 01:42:19.391447 containerd[1933]: time="2025-03-25T01:42:19.391135742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c56d65db-jpg2m,Uid:2251ad9b-7622-4c24-a011-53211da3e456,Namespace:calico-system,Attempt:0,} returns sandbox id \"f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040\"" Mar 25 01:42:20.581033 kubelet[3195]: E0325 01:42:20.579183 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:20.961539 containerd[1933]: time="2025-03-25T01:42:20.961169793Z"
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:20.969984 containerd[1933]: time="2025-03-25T01:42:20.969900156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 25 01:42:20.970339 containerd[1933]: time="2025-03-25T01:42:20.970277729Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:20.974722 containerd[1933]: time="2025-03-25T01:42:20.974664787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:20.975957 containerd[1933]: time="2025-03-25T01:42:20.975424812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.617460343s" Mar 25 01:42:20.975957 containerd[1933]: time="2025-03-25T01:42:20.975473081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 25 01:42:20.977007 containerd[1933]: time="2025-03-25T01:42:20.976834574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 25 01:42:20.987407 containerd[1933]: time="2025-03-25T01:42:20.987365030Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for 
container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 01:42:21.006614 containerd[1933]: time="2025-03-25T01:42:21.002372801Z" level=info msg="Container 34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:42:21.022202 containerd[1933]: time="2025-03-25T01:42:21.022144797Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\"" Mar 25 01:42:21.023106 containerd[1933]: time="2025-03-25T01:42:21.022962112Z" level=info msg="StartContainer for \"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\"" Mar 25 01:42:21.025161 containerd[1933]: time="2025-03-25T01:42:21.025128675Z" level=info msg="connecting to shim 34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9" address="unix:///run/containerd/s/d83a38d66ae388f23c1bed241a8bf1c99d3a5b4b0e034708a6d46e1ffa162c5e" protocol=ttrpc version=3 Mar 25 01:42:21.053743 systemd[1]: Started cri-containerd-34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9.scope - libcontainer container 34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9. Mar 25 01:42:21.169907 containerd[1933]: time="2025-03-25T01:42:21.169813953Z" level=info msg="StartContainer for \"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\" returns successfully" Mar 25 01:42:21.186885 systemd[1]: cri-containerd-34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9.scope: Deactivated successfully. 
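The repeated driver-call.go failures earlier in this excerpt come from kubelet invoking a FlexVolume driver binary (`/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`) that does not exist on the node: the executable is not found, stdout stays empty, and JSON decoding of that empty output fails. A minimal sketch of the decode step (hypothetical helper name, not kubelet's actual Go code):

```python
import json

# Sketch of kubelet's driver-call handling: the driver's stdout must be a
# JSON status object. When the driver binary is missing, stdout is "" and
# decoding raises -- the "unexpected end of JSON input" lines in the log.
def parse_flexvolume_reply(stdout: str) -> dict:
    reply = json.loads(stdout)  # raises json.JSONDecodeError on empty output
    if reply.get("status") != "Success":
        raise RuntimeError(f"driver call failed: {reply.get('message')}")
    return reply

# A well-formed "init" reply per the FlexVolume convention:
INIT_OK = '{"status": "Success", "capabilities": {"attach": false}}'
```

A driver that prints a reply like `INIT_OK` on `init` would silence these probe errors; the log itself shows only the empty-output failure case.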
Mar 25 01:42:21.202694 containerd[1933]: time="2025-03-25T01:42:21.202638185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\" id:\"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\" pid:3948 exited_at:{seconds:1742866941 nanos:189263957}" Mar 25 01:42:21.202832 containerd[1933]: time="2025-03-25T01:42:21.202699063Z" level=info msg="received exit event container_id:\"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\" id:\"34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9\" pid:3948 exited_at:{seconds:1742866941 nanos:189263957}" Mar 25 01:42:21.243771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34e9b00d094459b900966c6d7661e1123ee95c63357a21b1fa246c679a8ef5a9-rootfs.mount: Deactivated successfully. Mar 25 01:42:22.691698 kubelet[3195]: E0325 01:42:22.691639 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:23.635582 containerd[1933]: time="2025-03-25T01:42:23.635536335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:23.638237 containerd[1933]: time="2025-03-25T01:42:23.637941555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 25 01:42:23.641768 containerd[1933]: time="2025-03-25T01:42:23.640304477Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:23.644179 containerd[1933]: time="2025-03-25T01:42:23.644137195Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:23.683546 containerd[1933]: time="2025-03-25T01:42:23.683438343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 2.706566447s" Mar 25 01:42:23.683926 containerd[1933]: time="2025-03-25T01:42:23.683897638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 25 01:42:23.685604 containerd[1933]: time="2025-03-25T01:42:23.685572352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 25 01:42:23.722777 containerd[1933]: time="2025-03-25T01:42:23.722736096Z" level=info msg="CreateContainer within sandbox \"f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 01:42:23.757877 containerd[1933]: time="2025-03-25T01:42:23.751839598Z" level=info msg="Container 4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:42:23.810409 containerd[1933]: time="2025-03-25T01:42:23.810355147Z" level=info msg="CreateContainer within sandbox \"f941c8a28233c299b14c905463b0bf989ee3cdbd26762f44cd8ec77a88a11040\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c\"" Mar 25 01:42:23.813596 containerd[1933]: time="2025-03-25T01:42:23.812661431Z" level=info msg="StartContainer for 
\"4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c\"" Mar 25 01:42:23.814899 containerd[1933]: time="2025-03-25T01:42:23.814864631Z" level=info msg="connecting to shim 4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c" address="unix:///run/containerd/s/2d20f492ec50bf4b3ec095b8c20cb5ef8be8792f23f87eb3e5fe80bbb0262220" protocol=ttrpc version=3 Mar 25 01:42:23.842803 systemd[1]: Started cri-containerd-4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c.scope - libcontainer container 4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c. Mar 25 01:42:23.915778 containerd[1933]: time="2025-03-25T01:42:23.915667953Z" level=info msg="StartContainer for \"4730a474579a5f03593765e88aa835661d2882abc43167526b9377a9c6ce2b7c\" returns successfully" Mar 25 01:42:24.580526 kubelet[3195]: E0325 01:42:24.579657 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:24.871268 kubelet[3195]: I0325 01:42:24.871111 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8c56d65db-jpg2m" podStartSLOduration=2.580190093 podStartE2EDuration="6.871090665s" podCreationTimestamp="2025-03-25 01:42:18 +0000 UTC" firstStartedPulling="2025-03-25 01:42:19.393938583 +0000 UTC m=+15.125237972" lastFinishedPulling="2025-03-25 01:42:23.684839144 +0000 UTC m=+19.416138544" observedRunningTime="2025-03-25 01:42:24.866327249 +0000 UTC m=+20.597626653" watchObservedRunningTime="2025-03-25 01:42:24.871090665 +0000 UTC m=+20.602390066" Mar 25 01:42:25.828578 kubelet[3195]: I0325 01:42:25.828486 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:42:26.579798 kubelet[3195]: E0325 01:42:26.579747 
3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:28.580236 kubelet[3195]: E0325 01:42:28.578642 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:29.111573 containerd[1933]: time="2025-03-25T01:42:29.111528016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:29.112870 containerd[1933]: time="2025-03-25T01:42:29.112522843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 25 01:42:29.114217 containerd[1933]: time="2025-03-25T01:42:29.114004108Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:29.116206 containerd[1933]: time="2025-03-25T01:42:29.116175955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:29.116951 containerd[1933]: time="2025-03-25T01:42:29.116918925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 5.431306825s" Mar 25 01:42:29.117028 containerd[1933]: time="2025-03-25T01:42:29.116958100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 25 01:42:29.120810 containerd[1933]: time="2025-03-25T01:42:29.120772797Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 01:42:29.135660 containerd[1933]: time="2025-03-25T01:42:29.131779538Z" level=info msg="Container 4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:42:29.145563 containerd[1933]: time="2025-03-25T01:42:29.145516776Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\"" Mar 25 01:42:29.147216 containerd[1933]: time="2025-03-25T01:42:29.147177377Z" level=info msg="StartContainer for \"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\"" Mar 25 01:42:29.152321 containerd[1933]: time="2025-03-25T01:42:29.152280145Z" level=info msg="connecting to shim 4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4" address="unix:///run/containerd/s/d83a38d66ae388f23c1bed241a8bf1c99d3a5b4b0e034708a6d46e1ffa162c5e" protocol=ttrpc version=3 Mar 25 01:42:29.231772 systemd[1]: Started cri-containerd-4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4.scope - libcontainer container 4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4. 
Mar 25 01:42:29.322281 containerd[1933]: time="2025-03-25T01:42:29.322231675Z" level=info msg="StartContainer for \"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\" returns successfully" Mar 25 01:42:30.174324 systemd[1]: cri-containerd-4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4.scope: Deactivated successfully. Mar 25 01:42:30.174784 systemd[1]: cri-containerd-4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4.scope: Consumed 587ms CPU time, 145.9M memory peak, 3.5M read from disk, 154M written to disk. Mar 25 01:42:30.187312 containerd[1933]: time="2025-03-25T01:42:30.187269547Z" level=info msg="received exit event container_id:\"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\" id:\"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\" pid:4047 exited_at:{seconds:1742866950 nanos:187057301}" Mar 25 01:42:30.248323 containerd[1933]: time="2025-03-25T01:42:30.248228254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\" id:\"4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4\" pid:4047 exited_at:{seconds:1742866950 nanos:187057301}" Mar 25 01:42:30.337266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e6be73f97e32a2234bf59675734c31701bcfcf42005515337fd300eff78c9b4-rootfs.mount: Deactivated successfully. Mar 25 01:42:30.352569 kubelet[3195]: I0325 01:42:30.350949 3195 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 25 01:42:30.431345 systemd[1]: Created slice kubepods-burstable-podd0e2df9c_92a5_478b_afa0_fa39eaee69f4.slice - libcontainer container kubepods-burstable-podd0e2df9c_92a5_478b_afa0_fa39eaee69f4.slice. 
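The TaskExit records embed the container's exit time as a protobuf timestamp (`exited_at:{seconds:... nanos:...}`). A small helper (hypothetical, for log post-processing) converts it back to UTC, which lines up with the journald prefix on the same record:

```python
import re
from datetime import datetime, timezone

def parse_exited_at(record: str) -> datetime:
    """Recover the UTC exit time from a containerd TaskExit log record."""
    m = re.search(r"exited_at:\{seconds:(\d+) nanos:(\d+)\}", record)
    if m is None:
        raise ValueError("no exited_at field in record")
    seconds, nanos = int(m.group(1)), int(m.group(2))
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
        microsecond=nanos // 1000
    )

# e.g. exited_at:{seconds:1742866950 nanos:187057301} -> 2025-03-25 01:42:30 UTC,
# matching the "Mar 25 01:42:30" prefix on that record.
```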
Mar 25 01:42:30.450644 kubelet[3195]: W0325 01:42:30.447143 3195 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-255" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-255' and this object Mar 25 01:42:30.450644 kubelet[3195]: E0325 01:42:30.447225 3195 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-30-255\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-30-255' and this object" logger="UnhandledError" Mar 25 01:42:30.462554 systemd[1]: Created slice kubepods-burstable-pod26c958e2_82c6_4601_8d91_3a0d6d24c487.slice - libcontainer container kubepods-burstable-pod26c958e2_82c6_4601_8d91_3a0d6d24c487.slice. Mar 25 01:42:30.478005 systemd[1]: Created slice kubepods-besteffort-podaa6d36d8_2608_4f39_a261_631631193351.slice - libcontainer container kubepods-besteffort-podaa6d36d8_2608_4f39_a261_631631193351.slice. 
Mar 25 01:42:30.482570 kubelet[3195]: I0325 01:42:30.482368 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srssx\" (UniqueName: \"kubernetes.io/projected/aa6d36d8-2608-4f39-a261-631631193351-kube-api-access-srssx\") pod \"calico-apiserver-5dd6977d74-jf5rg\" (UID: \"aa6d36d8-2608-4f39-a261-631631193351\") " pod="calico-apiserver/calico-apiserver-5dd6977d74-jf5rg" Mar 25 01:42:30.482570 kubelet[3195]: I0325 01:42:30.482424 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa6d36d8-2608-4f39-a261-631631193351-calico-apiserver-certs\") pod \"calico-apiserver-5dd6977d74-jf5rg\" (UID: \"aa6d36d8-2608-4f39-a261-631631193351\") " pod="calico-apiserver/calico-apiserver-5dd6977d74-jf5rg" Mar 25 01:42:30.482570 kubelet[3195]: I0325 01:42:30.482461 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e2df9c-92a5-478b-afa0-fa39eaee69f4-config-volume\") pod \"coredns-6f6b679f8f-tl4pj\" (UID: \"d0e2df9c-92a5-478b-afa0-fa39eaee69f4\") " pod="kube-system/coredns-6f6b679f8f-tl4pj" Mar 25 01:42:30.482964 kubelet[3195]: I0325 01:42:30.482488 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26c958e2-82c6-4601-8d91-3a0d6d24c487-config-volume\") pod \"coredns-6f6b679f8f-jhw4j\" (UID: \"26c958e2-82c6-4601-8d91-3a0d6d24c487\") " pod="kube-system/coredns-6f6b679f8f-jhw4j" Mar 25 01:42:30.482964 kubelet[3195]: I0325 01:42:30.482866 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvbn\" (UniqueName: \"kubernetes.io/projected/26c958e2-82c6-4601-8d91-3a0d6d24c487-kube-api-access-8fvbn\") pod \"coredns-6f6b679f8f-jhw4j\" (UID: 
\"26c958e2-82c6-4601-8d91-3a0d6d24c487\") " pod="kube-system/coredns-6f6b679f8f-jhw4j" Mar 25 01:42:30.482964 kubelet[3195]: I0325 01:42:30.482916 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqw5k\" (UniqueName: \"kubernetes.io/projected/d0e2df9c-92a5-478b-afa0-fa39eaee69f4-kube-api-access-vqw5k\") pod \"coredns-6f6b679f8f-tl4pj\" (UID: \"d0e2df9c-92a5-478b-afa0-fa39eaee69f4\") " pod="kube-system/coredns-6f6b679f8f-tl4pj" Mar 25 01:42:30.493635 systemd[1]: Created slice kubepods-besteffort-pod7e97e061_a2b2_4acc_b91c_b54bb064bbfc.slice - libcontainer container kubepods-besteffort-pod7e97e061_a2b2_4acc_b91c_b54bb064bbfc.slice. Mar 25 01:42:30.511879 systemd[1]: Created slice kubepods-besteffort-pod4d234cee_22f7_4ba9_9e7c_8eb803c64a3c.slice - libcontainer container kubepods-besteffort-pod4d234cee_22f7_4ba9_9e7c_8eb803c64a3c.slice. Mar 25 01:42:30.588532 kubelet[3195]: I0325 01:42:30.583901 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgmh\" (UniqueName: \"kubernetes.io/projected/7e97e061-a2b2-4acc-b91c-b54bb064bbfc-kube-api-access-dbgmh\") pod \"calico-kube-controllers-58ffd4d689-dbxxh\" (UID: \"7e97e061-a2b2-4acc-b91c-b54bb064bbfc\") " pod="calico-system/calico-kube-controllers-58ffd4d689-dbxxh" Mar 25 01:42:30.588532 kubelet[3195]: I0325 01:42:30.583977 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d234cee-22f7-4ba9-9e7c-8eb803c64a3c-calico-apiserver-certs\") pod \"calico-apiserver-5dd6977d74-dd8xm\" (UID: \"4d234cee-22f7-4ba9-9e7c-8eb803c64a3c\") " pod="calico-apiserver/calico-apiserver-5dd6977d74-dd8xm" Mar 25 01:42:30.588532 kubelet[3195]: I0325 01:42:30.584066 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7e97e061-a2b2-4acc-b91c-b54bb064bbfc-tigera-ca-bundle\") pod \"calico-kube-controllers-58ffd4d689-dbxxh\" (UID: \"7e97e061-a2b2-4acc-b91c-b54bb064bbfc\") " pod="calico-system/calico-kube-controllers-58ffd4d689-dbxxh" Mar 25 01:42:30.588532 kubelet[3195]: I0325 01:42:30.584095 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5cbn\" (UniqueName: \"kubernetes.io/projected/4d234cee-22f7-4ba9-9e7c-8eb803c64a3c-kube-api-access-k5cbn\") pod \"calico-apiserver-5dd6977d74-dd8xm\" (UID: \"4d234cee-22f7-4ba9-9e7c-8eb803c64a3c\") " pod="calico-apiserver/calico-apiserver-5dd6977d74-dd8xm" Mar 25 01:42:30.628850 systemd[1]: Created slice kubepods-besteffort-podbbc7e29b_a6fe_4fe7_a72c_99ff59b8eee7.slice - libcontainer container kubepods-besteffort-podbbc7e29b_a6fe_4fe7_a72c_99ff59b8eee7.slice. Mar 25 01:42:30.642277 containerd[1933]: time="2025-03-25T01:42:30.642229552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm72j,Uid:bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7,Namespace:calico-system,Attempt:0,}" Mar 25 01:42:30.758195 containerd[1933]: time="2025-03-25T01:42:30.758034305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tl4pj,Uid:d0e2df9c-92a5-478b-afa0-fa39eaee69f4,Namespace:kube-system,Attempt:0,}" Mar 25 01:42:30.771045 containerd[1933]: time="2025-03-25T01:42:30.770918972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jhw4j,Uid:26c958e2-82c6-4601-8d91-3a0d6d24c487,Namespace:kube-system,Attempt:0,}" Mar 25 01:42:30.813025 containerd[1933]: time="2025-03-25T01:42:30.812913242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58ffd4d689-dbxxh,Uid:7e97e061-a2b2-4acc-b91c-b54bb064bbfc,Namespace:calico-system,Attempt:0,}" Mar 25 01:42:30.912787 containerd[1933]: time="2025-03-25T01:42:30.912443427Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 25 01:42:31.238925 containerd[1933]: time="2025-03-25T01:42:31.238807638Z" level=error msg="Failed to destroy network for sandbox \"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.298033 containerd[1933]: time="2025-03-25T01:42:31.251751356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58ffd4d689-dbxxh,Uid:7e97e061-a2b2-4acc-b91c-b54bb064bbfc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.298361 containerd[1933]: time="2025-03-25T01:42:31.275606917Z" level=error msg="Failed to destroy network for sandbox \"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.298847 containerd[1933]: time="2025-03-25T01:42:31.275655316Z" level=error msg="Failed to destroy network for sandbox \"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.298980 containerd[1933]: time="2025-03-25T01:42:31.293935624Z" level=error msg="Failed to destroy network for sandbox 
\"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.299663 containerd[1933]: time="2025-03-25T01:42:31.299610639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jhw4j,Uid:26c958e2-82c6-4601-8d91-3a0d6d24c487,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.301310 kubelet[3195]: E0325 01:42:31.301250 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.301720 kubelet[3195]: E0325 01:42:31.301466 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58ffd4d689-dbxxh" Mar 25 01:42:31.301720 kubelet[3195]: E0325 01:42:31.301536 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58ffd4d689-dbxxh" Mar 25 01:42:31.301720 kubelet[3195]: E0325 01:42:31.301624 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58ffd4d689-dbxxh_calico-system(7e97e061-a2b2-4acc-b91c-b54bb064bbfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58ffd4d689-dbxxh_calico-system(7e97e061-a2b2-4acc-b91c-b54bb064bbfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64763c752a9f61af41e260dc7594f6eb4246161729613523bc307827ed048a04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58ffd4d689-dbxxh" podUID="7e97e061-a2b2-4acc-b91c-b54bb064bbfc" Mar 25 01:42:31.309701 kubelet[3195]: E0325 01:42:31.309655 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.309851 containerd[1933]: time="2025-03-25T01:42:31.309651684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tl4pj,Uid:d0e2df9c-92a5-478b-afa0-fa39eaee69f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.309948 kubelet[3195]: E0325 01:42:31.309720 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jhw4j" Mar 25 01:42:31.309948 kubelet[3195]: E0325 01:42:31.309744 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jhw4j" Mar 25 01:42:31.309948 kubelet[3195]: E0325 01:42:31.309788 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jhw4j_kube-system(26c958e2-82c6-4601-8d91-3a0d6d24c487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jhw4j_kube-system(26c958e2-82c6-4601-8d91-3a0d6d24c487)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c0b59d916f2bd03a95125afed56b915db208d5b94d9e016d8ca7906e352f0da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jhw4j" podUID="26c958e2-82c6-4601-8d91-3a0d6d24c487" Mar 25 01:42:31.311396 kubelet[3195]: 
E0325 01:42:31.310404 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.311396 kubelet[3195]: E0325 01:42:31.310604 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tl4pj" Mar 25 01:42:31.311396 kubelet[3195]: E0325 01:42:31.310698 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tl4pj" Mar 25 01:42:31.311864 kubelet[3195]: E0325 01:42:31.310929 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-tl4pj_kube-system(d0e2df9c-92a5-478b-afa0-fa39eaee69f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-tl4pj_kube-system(d0e2df9c-92a5-478b-afa0-fa39eaee69f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec50a348fd3dd2991df974e9a10561d46e207f9f51c5f9ed53ea345a0f124616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-tl4pj" podUID="d0e2df9c-92a5-478b-afa0-fa39eaee69f4" Mar 25 01:42:31.312023 containerd[1933]: time="2025-03-25T01:42:31.311988216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm72j,Uid:bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.312237 kubelet[3195]: E0325 01:42:31.312209 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.312316 kubelet[3195]: E0325 01:42:31.312249 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gm72j" Mar 25 01:42:31.312316 kubelet[3195]: E0325 01:42:31.312272 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gm72j" Mar 25 01:42:31.312429 kubelet[3195]: E0325 01:42:31.312333 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gm72j_calico-system(bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gm72j_calico-system(bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"799791f6ecf838b921f08b069b377590e7f196e5fe9520b54a153e04d184ace8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gm72j" podUID="bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7" Mar 25 01:42:31.339440 systemd[1]: run-netns-cni\x2dd00da626\x2dd2bb\x2d9e4d\x2dc70d\x2def6e26b34b13.mount: Deactivated successfully. 
Mar 25 01:42:31.703931 containerd[1933]: time="2025-03-25T01:42:31.703891797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-jf5rg,Uid:aa6d36d8-2608-4f39-a261-631631193351,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:42:31.721656 containerd[1933]: time="2025-03-25T01:42:31.721238521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-dd8xm,Uid:4d234cee-22f7-4ba9-9e7c-8eb803c64a3c,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:42:31.822115 containerd[1933]: time="2025-03-25T01:42:31.822053199Z" level=error msg="Failed to destroy network for sandbox \"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.823316 containerd[1933]: time="2025-03-25T01:42:31.823237887Z" level=error msg="Failed to destroy network for sandbox \"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.823711 containerd[1933]: time="2025-03-25T01:42:31.823486168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-jf5rg,Uid:aa6d36d8-2608-4f39-a261-631631193351,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.824122 kubelet[3195]: E0325 01:42:31.824077 3195 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.824769 kubelet[3195]: E0325 01:42:31.824145 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd6977d74-jf5rg" Mar 25 01:42:31.824769 kubelet[3195]: E0325 01:42:31.824171 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd6977d74-jf5rg" Mar 25 01:42:31.824769 kubelet[3195]: E0325 01:42:31.824223 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd6977d74-jf5rg_calico-apiserver(aa6d36d8-2608-4f39-a261-631631193351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd6977d74-jf5rg_calico-apiserver(aa6d36d8-2608-4f39-a261-631631193351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ed452b08ed5037cbff8a8e62fda38acb0c9f5d20bc1eeefe187a829eb8cd87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd6977d74-jf5rg" podUID="aa6d36d8-2608-4f39-a261-631631193351" Mar 25 01:42:31.826039 containerd[1933]: time="2025-03-25T01:42:31.825995100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-dd8xm,Uid:4d234cee-22f7-4ba9-9e7c-8eb803c64a3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.826232 kubelet[3195]: E0325 01:42:31.826198 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:42:31.826314 kubelet[3195]: E0325 01:42:31.826259 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd6977d74-dd8xm" Mar 25 01:42:31.826314 kubelet[3195]: E0325 01:42:31.826286 3195 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd6977d74-dd8xm" Mar 25 01:42:31.826395 kubelet[3195]: E0325 01:42:31.826340 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd6977d74-dd8xm_calico-apiserver(4d234cee-22f7-4ba9-9e7c-8eb803c64a3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd6977d74-dd8xm_calico-apiserver(4d234cee-22f7-4ba9-9e7c-8eb803c64a3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df32c22bda9306336e49b3e0601e4f469b458dc14ec36a942d42855028b6fad7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd6977d74-dd8xm" podUID="4d234cee-22f7-4ba9-9e7c-8eb803c64a3c" Mar 25 01:42:32.337146 systemd[1]: run-netns-cni\x2da3e660b2\x2d3b23\x2dfd26\x2d831d\x2d378349a06ebe.mount: Deactivated successfully. Mar 25 01:42:32.337291 systemd[1]: run-netns-cni\x2df0bda5ed\x2d19b1\x2d3ced\x2d3474\x2d67d61070ec51.mount: Deactivated successfully. Mar 25 01:42:38.726090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178327783.mount: Deactivated successfully. 
Mar 25 01:42:38.939676 containerd[1933]: time="2025-03-25T01:42:38.858702755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 25 01:42:38.941766 containerd[1933]: time="2025-03-25T01:42:38.911749789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:38.941766 containerd[1933]: time="2025-03-25T01:42:38.917655039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 8.003706206s" Mar 25 01:42:38.941766 containerd[1933]: time="2025-03-25T01:42:38.940861542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 25 01:42:38.977855 containerd[1933]: time="2025-03-25T01:42:38.976030237Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:38.977855 containerd[1933]: time="2025-03-25T01:42:38.976919135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:42:39.038067 containerd[1933]: time="2025-03-25T01:42:39.037946538Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 01:42:39.113369 containerd[1933]: time="2025-03-25T01:42:39.113113095Z" level=info msg="Container 
e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:42:39.115035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644129324.mount: Deactivated successfully. Mar 25 01:42:39.189370 containerd[1933]: time="2025-03-25T01:42:39.189328581Z" level=info msg="CreateContainer within sandbox \"e8ba6598b8edb953a43da2911cdff13e8494f58dcf979ceb8bc328f7010f97ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\"" Mar 25 01:42:39.190223 containerd[1933]: time="2025-03-25T01:42:39.190178940Z" level=info msg="StartContainer for \"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\"" Mar 25 01:42:39.199357 containerd[1933]: time="2025-03-25T01:42:39.198898114Z" level=info msg="connecting to shim e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf" address="unix:///run/containerd/s/d83a38d66ae388f23c1bed241a8bf1c99d3a5b4b0e034708a6d46e1ffa162c5e" protocol=ttrpc version=3 Mar 25 01:42:39.393867 systemd[1]: Started cri-containerd-e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf.scope - libcontainer container e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf. Mar 25 01:42:39.512755 containerd[1933]: time="2025-03-25T01:42:39.512109675Z" level=info msg="StartContainer for \"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" returns successfully" Mar 25 01:42:39.691806 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 01:42:39.693179 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 25 01:42:40.100520 kubelet[3195]: I0325 01:42:40.080724 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xq6r6" podStartSLOduration=2.43425626 podStartE2EDuration="22.055218639s" podCreationTimestamp="2025-03-25 01:42:18 +0000 UTC" firstStartedPulling="2025-03-25 01:42:19.357104359 +0000 UTC m=+15.088403743" lastFinishedPulling="2025-03-25 01:42:38.978066743 +0000 UTC m=+34.709366122" observedRunningTime="2025-03-25 01:42:40.033358377 +0000 UTC m=+35.764657776" watchObservedRunningTime="2025-03-25 01:42:40.055218639 +0000 UTC m=+35.786518196" Mar 25 01:42:41.001915 kubelet[3195]: I0325 01:42:41.001881 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:42:41.579640 containerd[1933]: time="2025-03-25T01:42:41.579420111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58ffd4d689-dbxxh,Uid:7e97e061-a2b2-4acc-b91c-b54bb064bbfc,Namespace:calico-system,Attempt:0,}" Mar 25 01:42:41.628397 systemd[1]: Started sshd@7-172.31.30.255:22-147.75.109.163:33160.service - OpenSSH per-connection server daemon (147.75.109.163:33160). Mar 25 01:42:41.980251 sshd[4422]: Accepted publickey for core from 147.75.109.163 port 33160 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:42:41.985916 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:42:41.999834 systemd-logind[1901]: New session 8 of user core. Mar 25 01:42:42.010323 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:42:42.099542 kernel: bpftool[4478]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 01:42:42.541863 (udev-worker)[4306]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:42:42.547720 systemd-networkd[1761]: vxlan.calico: Link UP Mar 25 01:42:42.547728 systemd-networkd[1761]: vxlan.calico: Gained carrier Mar 25 01:42:42.585305 systemd-networkd[1761]: calia21901c9fd2: Link UP Mar 25 01:42:42.586825 (udev-worker)[4308]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:42:42.587898 systemd-networkd[1761]: calia21901c9fd2: Gained carrier Mar 25 01:42:42.624911 containerd[1933]: 2025-03-25 01:42:41.774 [INFO][4430] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 01:42:42.624911 containerd[1933]: 2025-03-25 01:42:41.848 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0 calico-kube-controllers-58ffd4d689- calico-system 7e97e061-a2b2-4acc-b91c-b54bb064bbfc 679 0 2025-03-25 01:42:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58ffd4d689 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-255 calico-kube-controllers-58ffd4d689-dbxxh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia21901c9fd2 [] []}} ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-" Mar 25 01:42:42.624911 containerd[1933]: 2025-03-25 01:42:41.848 [INFO][4430] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" 
Mar 25 01:42:42.624911 containerd[1933]: 2025-03-25 01:42:42.329 [INFO][4444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" HandleID="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Workload="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.464 [INFO][4444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" HandleID="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Workload="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-255", "pod":"calico-kube-controllers-58ffd4d689-dbxxh", "timestamp":"2025-03-25 01:42:42.329285691 +0000 UTC"}, Hostname:"ip-172-31-30-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.464 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.464 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.465 [INFO][4444] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-255' Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.470 [INFO][4444] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" host="ip-172-31-30-255" Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.491 [INFO][4444] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-255" Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.504 [INFO][4444] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ip-172-31-30-255" Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.507 [INFO][4444] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ip-172-31-30-255" Mar 25 01:42:42.630280 containerd[1933]: 2025-03-25 01:42:42.512 [INFO][4444] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ip-172-31-30-255" Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.512 [INFO][4444] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" host="ip-172-31-30-255" Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.517 [INFO][4444] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9 Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.528 [INFO][4444] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" host="ip-172-31-30-255" Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.551 [INFO][4444] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.65/26] block=192.168.79.64/26 
handle="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" host="ip-172-31-30-255" Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.551 [INFO][4444] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.65/26] handle="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" host="ip-172-31-30-255" Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.551 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:42:42.630775 containerd[1933]: 2025-03-25 01:42:42.551 [INFO][4444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.65/26] IPv6=[] ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" HandleID="k8s-pod-network.5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Workload="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" Mar 25 01:42:42.631248 containerd[1933]: 2025-03-25 01:42:42.565 [INFO][4430] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0", GenerateName:"calico-kube-controllers-58ffd4d689-", Namespace:"calico-system", SelfLink:"", UID:"7e97e061-a2b2-4acc-b91c-b54bb064bbfc", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58ffd4d689", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-255", ContainerID:"", Pod:"calico-kube-controllers-58ffd4d689-dbxxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia21901c9fd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:42:42.631364 containerd[1933]: 2025-03-25 01:42:42.565 [INFO][4430] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.65/32] ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" Mar 25 01:42:42.631364 containerd[1933]: 2025-03-25 01:42:42.565 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia21901c9fd2 ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" Mar 25 01:42:42.631364 containerd[1933]: 2025-03-25 01:42:42.588 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" 
Mar 25 01:42:42.631487 containerd[1933]: 2025-03-25 01:42:42.589 [INFO][4430] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0", GenerateName:"calico-kube-controllers-58ffd4d689-", Namespace:"calico-system", SelfLink:"", UID:"7e97e061-a2b2-4acc-b91c-b54bb064bbfc", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58ffd4d689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-255", ContainerID:"5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9", Pod:"calico-kube-controllers-58ffd4d689-dbxxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia21901c9fd2", MAC:"46:e7:6e:2f:28:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 
01:42:42.632543 containerd[1933]: 2025-03-25 01:42:42.616 [INFO][4430] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5352687f42b631479d6e3721ff49f79042cebe8cd5821c8619f002645cbbd7a9" Namespace="calico-system" Pod="calico-kube-controllers-58ffd4d689-dbxxh" WorkloadEndpoint="ip--172--31--30--255-k8s-calico--kube--controllers--58ffd4d689--dbxxh-eth0" Mar 25 01:42:42.780028 sshd[4460]: Connection closed by 147.75.109.163 port 33160 Mar 25 01:42:42.780876 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Mar 25 01:42:42.785692 systemd[1]: sshd@7-172.31.30.255:22-147.75.109.163:33160.service: Deactivated successfully. Mar 25 01:42:42.790106 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:42:42.792447 systemd-logind[1901]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:42:42.795114 systemd-logind[1901]: Removed session 8. Mar 25 01:42:43.307649 kubelet[3195]: I0325 01:42:43.307604 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:42:43.581292 containerd[1933]: time="2025-03-25T01:42:43.580760242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-dd8xm,Uid:4d234cee-22f7-4ba9-9e7c-8eb803c64a3c,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:42:43.582006 containerd[1933]: time="2025-03-25T01:42:43.581706404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tl4pj,Uid:d0e2df9c-92a5-478b-afa0-fa39eaee69f4,Namespace:kube-system,Attempt:0,}" Mar 25 01:42:43.663046 containerd[1933]: time="2025-03-25T01:42:43.662994132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" id:\"d0dbc35d4f38dcdd833aab60fe4d1c990724de6755f687c90c9924333cb1f286\" pid:4595 exited_at:{seconds:1742866963 nanos:615999058}" Mar 25 01:42:43.763272 containerd[1933]: time="2025-03-25T01:42:43.762924767Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" id:\"f001d493456d006d3590b8e1afbd1cdb741fd6ee423499458c5c918ea4b71941\" pid:4620 exited_at:{seconds:1742866963 nanos:762603320}" Mar 25 01:42:44.045588 systemd-networkd[1761]: calia21901c9fd2: Gained IPv6LL Mar 25 01:42:44.296069 systemd-networkd[1761]: vxlan.calico: Gained IPv6LL Mar 25 01:42:44.582567 containerd[1933]: time="2025-03-25T01:42:44.582525002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jhw4j,Uid:26c958e2-82c6-4601-8d91-3a0d6d24c487,Namespace:kube-system,Attempt:0,}" Mar 25 01:42:44.584691 containerd[1933]: time="2025-03-25T01:42:44.582530638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd6977d74-jf5rg,Uid:aa6d36d8-2608-4f39-a261-631631193351,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:42:44.584691 containerd[1933]: time="2025-03-25T01:42:44.582583393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm72j,Uid:bbc7e29b-a6fe-4fe7-a72c-99ff59b8eee7,Namespace:calico-system,Attempt:0,}" Mar 25 01:42:47.219552 ntpd[1896]: Listen normally on 7 vxlan.calico 192.168.79.64:123 Mar 25 01:42:47.219646 ntpd[1896]: Listen normally on 8 vxlan.calico [fe80::64cd:b1ff:fe1d:e78%4]:123 Mar 25 01:42:47.220069 ntpd[1896]: 25 Mar 01:42:47 ntpd[1896]: Listen normally on 7 vxlan.calico 192.168.79.64:123 Mar 25 01:42:47.220069 ntpd[1896]: 25 Mar 01:42:47 ntpd[1896]: Listen normally on 8 vxlan.calico [fe80::64cd:b1ff:fe1d:e78%4]:123 Mar 25 01:42:47.220069 ntpd[1896]: 25 Mar 01:42:47 ntpd[1896]: Listen normally on 9 calia21901c9fd2 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 25 01:42:47.219721 ntpd[1896]: Listen normally on 9 calia21901c9fd2 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 25 01:42:47.813867 systemd[1]: Started sshd@8-172.31.30.255:22-147.75.109.163:33172.service - OpenSSH per-connection server daemon (147.75.109.163:33172). 
Mar 25 01:42:48.039644 sshd[4636]: Accepted publickey for core from 147.75.109.163 port 33172 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:42:48.041485 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:42:48.051200 systemd-logind[1901]: New session 9 of user core. Mar 25 01:42:48.057273 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:42:48.285866 sshd[4646]: Connection closed by 147.75.109.163 port 33172 Mar 25 01:42:48.288251 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Mar 25 01:42:48.295682 systemd-logind[1901]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:42:48.298692 systemd[1]: sshd@8-172.31.30.255:22-147.75.109.163:33172.service: Deactivated successfully. Mar 25 01:42:48.303996 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:42:48.305655 systemd-logind[1901]: Removed session 9. Mar 25 01:42:53.322847 systemd[1]: Started sshd@9-172.31.30.255:22-147.75.109.163:58372.service - OpenSSH per-connection server daemon (147.75.109.163:58372). Mar 25 01:42:53.507205 sshd[4667]: Accepted publickey for core from 147.75.109.163 port 58372 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:42:53.508791 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:42:53.516452 systemd-logind[1901]: New session 10 of user core. Mar 25 01:42:53.520040 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:42:53.779327 sshd[4669]: Connection closed by 147.75.109.163 port 58372 Mar 25 01:42:53.781202 sshd-session[4667]: pam_unix(sshd:session): session closed for user core Mar 25 01:42:53.801446 systemd[1]: sshd@9-172.31.30.255:22-147.75.109.163:58372.service: Deactivated successfully. Mar 25 01:42:53.809928 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 25 01:42:53.813208 systemd-logind[1901]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:42:53.837302 systemd[1]: Started sshd@10-172.31.30.255:22-147.75.109.163:58380.service - OpenSSH per-connection server daemon (147.75.109.163:58380). Mar 25 01:42:53.839582 systemd-logind[1901]: Removed session 10. Mar 25 01:42:54.031989 sshd[4681]: Accepted publickey for core from 147.75.109.163 port 58380 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:42:54.032701 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:42:54.040191 systemd-logind[1901]: New session 11 of user core. Mar 25 01:42:54.046728 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:42:54.352108 sshd[4684]: Connection closed by 147.75.109.163 port 58380 Mar 25 01:42:54.357519 sshd-session[4681]: pam_unix(sshd:session): session closed for user core Mar 25 01:42:54.364450 systemd[1]: sshd@10-172.31.30.255:22-147.75.109.163:58380.service: Deactivated successfully. Mar 25 01:42:54.368613 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:42:54.374935 systemd-logind[1901]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:42:54.394861 systemd[1]: Started sshd@11-172.31.30.255:22-147.75.109.163:58392.service - OpenSSH per-connection server daemon (147.75.109.163:58392). Mar 25 01:42:54.397495 systemd-logind[1901]: Removed session 11. Mar 25 01:42:54.594643 sshd[4693]: Accepted publickey for core from 147.75.109.163 port 58392 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:42:54.596480 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:42:54.603323 systemd-logind[1901]: New session 12 of user core. Mar 25 01:42:54.607696 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 25 01:42:54.871586 sshd[4696]: Connection closed by 147.75.109.163 port 58392 Mar 25 01:42:54.872325 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Mar 25 01:42:54.877080 systemd[1]: sshd@11-172.31.30.255:22-147.75.109.163:58392.service: Deactivated successfully. Mar 25 01:42:54.879643 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:42:54.880628 systemd-logind[1901]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:42:54.882291 systemd-logind[1901]: Removed session 12. Mar 25 01:42:59.917222 systemd[1]: Started sshd@12-172.31.30.255:22-147.75.109.163:58394.service - OpenSSH per-connection server daemon (147.75.109.163:58394). Mar 25 01:43:00.144612 sshd[4710]: Accepted publickey for core from 147.75.109.163 port 58394 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:00.148988 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:00.172013 systemd-logind[1901]: New session 13 of user core. Mar 25 01:43:00.180774 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:43:00.432880 sshd[4712]: Connection closed by 147.75.109.163 port 58394 Mar 25 01:43:00.434871 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:00.439188 systemd-logind[1901]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:43:00.440013 systemd[1]: sshd@12-172.31.30.255:22-147.75.109.163:58394.service: Deactivated successfully. Mar 25 01:43:00.443988 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:43:00.445134 systemd-logind[1901]: Removed session 13. Mar 25 01:43:05.468620 systemd[1]: Started sshd@13-172.31.30.255:22-147.75.109.163:49452.service - OpenSSH per-connection server daemon (147.75.109.163:49452). 
Mar 25 01:43:05.671124 sshd[4738]: Accepted publickey for core from 147.75.109.163 port 49452 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:05.672688 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:05.678337 systemd-logind[1901]: New session 14 of user core. Mar 25 01:43:05.687878 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:43:05.933423 sshd[4740]: Connection closed by 147.75.109.163 port 49452 Mar 25 01:43:05.935302 sshd-session[4738]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:05.952041 systemd-logind[1901]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:43:05.953171 systemd[1]: sshd@13-172.31.30.255:22-147.75.109.163:49452.service: Deactivated successfully. Mar 25 01:43:05.956291 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:43:05.959035 systemd-logind[1901]: Removed session 14. Mar 25 01:43:10.232839 kubelet[3195]: E0325 01:43:10.232597 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:10.333593 kubelet[3195]: E0325 01:43:10.333547 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:10.534760 kubelet[3195]: E0325 01:43:10.534632 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:10.935101 kubelet[3195]: E0325 01:43:10.935049 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:10.973343 systemd[1]: Started sshd@14-172.31.30.255:22-147.75.109.163:42238.service - OpenSSH per-connection server daemon (147.75.109.163:42238). 
Mar 25 01:43:11.253550 sshd[4752]: Accepted publickey for core from 147.75.109.163 port 42238 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:11.274329 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:11.282884 systemd-logind[1901]: New session 15 of user core. Mar 25 01:43:11.288714 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:43:11.533050 sshd[4754]: Connection closed by 147.75.109.163 port 42238 Mar 25 01:43:11.535333 sshd-session[4752]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:11.539527 systemd[1]: sshd@14-172.31.30.255:22-147.75.109.163:42238.service: Deactivated successfully. Mar 25 01:43:11.542305 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:43:11.544564 systemd-logind[1901]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:43:11.545998 systemd-logind[1901]: Removed session 15. Mar 25 01:43:11.735217 kubelet[3195]: E0325 01:43:11.735163 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:13.336115 kubelet[3195]: E0325 01:43:13.336073 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:13.440243 containerd[1933]: time="2025-03-25T01:43:13.439787512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" id:\"a4a15dc819b7720d07c93fb6f4c710cde1b1effa9a24ec9d932859e535b92c96\" pid:4780 exited_at:{seconds:1742866993 nanos:438963827}" Mar 25 01:43:16.201559 kubelet[3195]: I0325 01:43:16.200490 3195 setters.go:600] "Node became not ready" node="ip-172-31-30-255" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:43:16Z","lastTransitionTime":"2025-03-25T01:43:16Z","reason":"KubeletNotReady","message":"container runtime is down"} Mar 25 01:43:16.537321 kubelet[3195]: E0325 01:43:16.537185 3195 
kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:16.590561 systemd[1]: Started sshd@15-172.31.30.255:22-147.75.109.163:42254.service - OpenSSH per-connection server daemon (147.75.109.163:42254). Mar 25 01:43:16.818306 sshd[4793]: Accepted publickey for core from 147.75.109.163 port 42254 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:16.823854 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:16.839862 systemd-logind[1901]: New session 16 of user core. Mar 25 01:43:16.852548 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:43:17.105564 sshd[4795]: Connection closed by 147.75.109.163 port 42254 Mar 25 01:43:17.107470 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:17.115418 systemd[1]: sshd@15-172.31.30.255:22-147.75.109.163:42254.service: Deactivated successfully. Mar 25 01:43:17.120040 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:43:17.122135 systemd-logind[1901]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:43:17.124006 systemd-logind[1901]: Removed session 16. Mar 25 01:43:17.150306 systemd[1]: Started sshd@16-172.31.30.255:22-147.75.109.163:42258.service - OpenSSH per-connection server daemon (147.75.109.163:42258). Mar 25 01:43:17.332934 sshd[4807]: Accepted publickey for core from 147.75.109.163 port 42258 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:17.333685 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:17.341630 systemd-logind[1901]: New session 17 of user core. Mar 25 01:43:17.343742 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 25 01:43:18.026577 sshd[4809]: Connection closed by 147.75.109.163 port 42258 Mar 25 01:43:18.029731 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:18.034078 systemd-logind[1901]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:43:18.035255 systemd[1]: sshd@16-172.31.30.255:22-147.75.109.163:42258.service: Deactivated successfully. Mar 25 01:43:18.038789 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:43:18.040074 systemd-logind[1901]: Removed session 17. Mar 25 01:43:18.059736 systemd[1]: Started sshd@17-172.31.30.255:22-147.75.109.163:42262.service - OpenSSH per-connection server daemon (147.75.109.163:42262). Mar 25 01:43:18.294048 sshd[4819]: Accepted publickey for core from 147.75.109.163 port 42262 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:18.297491 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:18.303216 systemd-logind[1901]: New session 18 of user core. Mar 25 01:43:18.309702 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:43:21.330956 sshd[4821]: Connection closed by 147.75.109.163 port 42262 Mar 25 01:43:21.332558 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:21.351652 systemd-logind[1901]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:43:21.353362 systemd[1]: sshd@17-172.31.30.255:22-147.75.109.163:42262.service: Deactivated successfully. Mar 25 01:43:21.363701 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:43:21.391977 systemd-logind[1901]: Removed session 18. Mar 25 01:43:21.398380 systemd[1]: Started sshd@18-172.31.30.255:22-147.75.109.163:49732.service - OpenSSH per-connection server daemon (147.75.109.163:49732). 
Mar 25 01:43:21.537335 kubelet[3195]: E0325 01:43:21.537285 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:21.593833 sshd[4838]: Accepted publickey for core from 147.75.109.163 port 49732 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:21.595854 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:21.602206 systemd-logind[1901]: New session 19 of user core. Mar 25 01:43:21.606713 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:43:22.161210 sshd[4840]: Connection closed by 147.75.109.163 port 49732 Mar 25 01:43:22.162047 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:22.174763 systemd[1]: sshd@18-172.31.30.255:22-147.75.109.163:49732.service: Deactivated successfully. Mar 25 01:43:22.185272 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:43:22.193176 systemd-logind[1901]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:43:22.215699 systemd[1]: Started sshd@19-172.31.30.255:22-147.75.109.163:49736.service - OpenSSH per-connection server daemon (147.75.109.163:49736). Mar 25 01:43:22.218613 systemd-logind[1901]: Removed session 19. Mar 25 01:43:22.419563 sshd[4849]: Accepted publickey for core from 147.75.109.163 port 49736 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:22.421494 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:22.430265 systemd-logind[1901]: New session 20 of user core. Mar 25 01:43:22.436133 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:43:22.651459 sshd[4852]: Connection closed by 147.75.109.163 port 49736 Mar 25 01:43:22.652790 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:22.657530 systemd-logind[1901]: Session 20 logged out. Waiting for processes to exit. 
Mar 25 01:43:22.658824 systemd[1]: sshd@19-172.31.30.255:22-147.75.109.163:49736.service: Deactivated successfully. Mar 25 01:43:22.682087 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:43:22.694380 systemd-logind[1901]: Removed session 20. Mar 25 01:43:26.537622 kubelet[3195]: E0325 01:43:26.537402 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:27.684684 systemd[1]: Started sshd@20-172.31.30.255:22-147.75.109.163:49752.service - OpenSSH per-connection server daemon (147.75.109.163:49752). Mar 25 01:43:27.858177 sshd[4873]: Accepted publickey for core from 147.75.109.163 port 49752 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:43:27.860349 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:43:27.867082 systemd-logind[1901]: New session 21 of user core. Mar 25 01:43:27.875701 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:43:28.083762 sshd[4875]: Connection closed by 147.75.109.163 port 49752 Mar 25 01:43:28.084877 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Mar 25 01:43:28.089189 systemd[1]: sshd@20-172.31.30.255:22-147.75.109.163:49752.service: Deactivated successfully. Mar 25 01:43:28.091651 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:43:28.092629 systemd-logind[1901]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:43:28.094133 systemd-logind[1901]: Removed session 21. Mar 25 01:43:31.538429 kubelet[3195]: E0325 01:43:31.538373 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 01:43:33.121434 systemd[1]: Started sshd@21-172.31.30.255:22-147.75.109.163:51830.service - OpenSSH per-connection server daemon (147.75.109.163:51830). 
Mar 25 01:43:33.342930 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 51830 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:43:33.343671 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:43:33.353466 systemd-logind[1901]: New session 22 of user core.
Mar 25 01:43:33.362730 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 01:43:33.622543 sshd[4895]: Connection closed by 147.75.109.163 port 51830
Mar 25 01:43:33.625160 sshd-session[4893]: pam_unix(sshd:session): session closed for user core
Mar 25 01:43:33.640480 systemd[1]: sshd@21-172.31.30.255:22-147.75.109.163:51830.service: Deactivated successfully.
Mar 25 01:43:33.643495 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 01:43:33.644887 systemd-logind[1901]: Session 22 logged out. Waiting for processes to exit.
Mar 25 01:43:33.646762 systemd-logind[1901]: Removed session 22.
Mar 25 01:43:36.539293 kubelet[3195]: E0325 01:43:36.539236 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:43:38.667007 systemd[1]: Started sshd@22-172.31.30.255:22-147.75.109.163:51840.service - OpenSSH per-connection server daemon (147.75.109.163:51840).
Mar 25 01:43:38.940532 sshd[4907]: Accepted publickey for core from 147.75.109.163 port 51840 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:43:38.945368 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:43:38.955689 systemd-logind[1901]: New session 23 of user core.
Mar 25 01:43:38.964927 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 01:43:39.266323 sshd[4909]: Connection closed by 147.75.109.163 port 51840
Mar 25 01:43:39.268308 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
Mar 25 01:43:39.271699 systemd[1]: sshd@22-172.31.30.255:22-147.75.109.163:51840.service: Deactivated successfully.
Mar 25 01:43:39.274261 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 01:43:39.276261 systemd-logind[1901]: Session 23 logged out. Waiting for processes to exit.
Mar 25 01:43:39.277556 systemd-logind[1901]: Removed session 23.
Mar 25 01:43:41.540060 kubelet[3195]: E0325 01:43:41.540008 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:43:43.461530 containerd[1933]: time="2025-03-25T01:43:43.461381514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" id:\"7a487e8465c9373186747772956f81d559e37f9a2921370f876ebfda254405f4\" pid:4934 exited_at:{seconds:1742867023 nanos:460911914}"
Mar 25 01:43:44.301446 systemd[1]: Started sshd@23-172.31.30.255:22-147.75.109.163:55920.service - OpenSSH per-connection server daemon (147.75.109.163:55920).
Mar 25 01:43:44.474186 sshd[4947]: Accepted publickey for core from 147.75.109.163 port 55920 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:43:44.475086 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:43:44.482677 systemd-logind[1901]: New session 24 of user core.
Mar 25 01:43:44.489700 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 01:43:44.697398 sshd[4949]: Connection closed by 147.75.109.163 port 55920
Mar 25 01:43:44.698827 sshd-session[4947]: pam_unix(sshd:session): session closed for user core
Mar 25 01:43:44.704202 systemd-logind[1901]: Session 24 logged out. Waiting for processes to exit.
Mar 25 01:43:44.705248 systemd[1]: sshd@23-172.31.30.255:22-147.75.109.163:55920.service: Deactivated successfully.
Mar 25 01:43:44.707765 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:43:44.709228 systemd-logind[1901]: Removed session 24.
Mar 25 01:43:46.540491 kubelet[3195]: E0325 01:43:46.540444 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:43:51.540956 kubelet[3195]: E0325 01:43:51.540906 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:43:56.541270 kubelet[3195]: E0325 01:43:56.541219 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:44:00.026801 systemd[1]: cri-containerd-93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006.scope: Deactivated successfully.
Mar 25 01:44:00.028812 systemd[1]: cri-containerd-93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006.scope: Consumed 3.053s CPU time, 65.2M memory peak, 29M read from disk.
Mar 25 01:44:00.030846 containerd[1933]: time="2025-03-25T01:44:00.030804016Z" level=info msg="received exit event container_id:\"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\" id:\"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\" pid:3726 exit_status:1 exited_at:{seconds:1742867040 nanos:28043513}"
Mar 25 01:44:00.031867 containerd[1933]: time="2025-03-25T01:44:00.031747534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\" id:\"93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006\" pid:3726 exit_status:1 exited_at:{seconds:1742867040 nanos:28043513}"
Mar 25 01:44:00.102953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f20234b4de16658f321743ba9c40d44f79db290f1513b86f5ddef7dc94a006-rootfs.mount: Deactivated successfully.
Mar 25 01:44:00.641099 systemd[1]: cri-containerd-1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129.scope: Deactivated successfully.
Mar 25 01:44:00.641455 systemd[1]: cri-containerd-1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129.scope: Consumed 3.283s CPU time, 82.5M memory peak, 46.1M read from disk.
Mar 25 01:44:00.651060 containerd[1933]: time="2025-03-25T01:44:00.651010641Z" level=info msg="received exit event container_id:\"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\" id:\"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\" pid:3029 exit_status:1 exited_at:{seconds:1742867040 nanos:650600037}"
Mar 25 01:44:00.651437 containerd[1933]: time="2025-03-25T01:44:00.651408567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\" id:\"1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129\" pid:3029 exit_status:1 exited_at:{seconds:1742867040 nanos:650600037}"
Mar 25 01:44:00.697846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc39ba9bc9540597f46740d44b4dbe9f0eadac7fb3cd5c7ca1eb81483975129-rootfs.mount: Deactivated successfully.
Mar 25 01:44:01.541642 kubelet[3195]: E0325 01:44:01.541603 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:44:04.787445 systemd[1]: cri-containerd-4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff.scope: Deactivated successfully.
Mar 25 01:44:04.788721 systemd[1]: cri-containerd-4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff.scope: Consumed 1.588s CPU time, 35.4M memory peak, 20.8M read from disk.
Mar 25 01:44:04.793622 containerd[1933]: time="2025-03-25T01:44:04.793585807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\" id:\"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\" pid:3034 exit_status:1 exited_at:{seconds:1742867044 nanos:789479752}"
Mar 25 01:44:04.795063 containerd[1933]: time="2025-03-25T01:44:04.793856357Z" level=info msg="received exit event container_id:\"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\" id:\"4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff\" pid:3034 exit_status:1 exited_at:{seconds:1742867044 nanos:789479752}"
Mar 25 01:44:04.830550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e15700fe39a3b36e2f62c34f37546503e767fcd425a8ca1552cf85213fdafff-rootfs.mount: Deactivated successfully.
Mar 25 01:44:06.542385 kubelet[3195]: E0325 01:44:06.542298 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:44:06.657819 kubelet[3195]: E0325 01:44:06.657753 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 25 01:44:11.542999 kubelet[3195]: E0325 01:44:11.542948 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:44:13.379254 containerd[1933]: time="2025-03-25T01:44:13.379196776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e484e3b6d1085bcb57106e5e4e13db61e85a9bc7a7f7a85719a72bd70cb27aaf\" id:\"c1b59126acb57551ad50d943a367f633f329acd5aad6bbf17cb105b558c4adb5\" pid:5027 exit_status:1 exited_at:{seconds:1742867053 nanos:378904841}"
Mar 25 01:44:16.544082 kubelet[3195]: E0325 01:44:16.544021 3195 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 01:44:16.658290 kubelet[3195]: E0325 01:44:16.658156 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"