Mar 17 17:25:59.252637 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Mar 17 17:25:59.252689 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025 Mar 17 17:25:59.252717 kernel: KASLR disabled due to lack of seed Mar 17 17:25:59.252734 kernel: efi: EFI v2.7 by EDK II Mar 17 17:25:59.252751 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Mar 17 17:25:59.252767 kernel: secureboot: Secure boot disabled Mar 17 17:25:59.252785 kernel: ACPI: Early table checksum verification disabled Mar 17 17:25:59.252800 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Mar 17 17:25:59.252817 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Mar 17 17:25:59.252885 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 17 17:25:59.252916 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Mar 17 17:25:59.252934 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 17 17:25:59.252950 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Mar 17 17:25:59.252966 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Mar 17 17:25:59.252986 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Mar 17 17:25:59.253007 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 17 17:25:59.253024 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Mar 17 17:25:59.253041 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Mar 17 17:25:59.253057 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Mar 17 17:25:59.253073 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Mar 17 17:25:59.253091 kernel: printk: bootconsole [uart0] enabled Mar 17 17:25:59.253107 kernel: NUMA: Failed to initialise from firmware Mar 17 17:25:59.253144 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Mar 17 17:25:59.253167 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Mar 17 17:25:59.253184 kernel: Zone ranges: Mar 17 17:25:59.253201 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 17 17:25:59.253224 kernel: DMA32 empty Mar 17 17:25:59.253241 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Mar 17 17:25:59.253259 kernel: Movable zone start for each node Mar 17 17:25:59.253275 kernel: Early memory node ranges Mar 17 17:25:59.253291 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Mar 17 17:25:59.253308 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Mar 17 17:25:59.253325 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Mar 17 17:25:59.253341 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Mar 17 17:25:59.253358 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Mar 17 17:25:59.253375 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Mar 17 17:25:59.253391 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Mar 17 17:25:59.253407 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Mar 17 17:25:59.253429 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Mar 17 17:25:59.253446 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Mar 17 17:25:59.253470 kernel: psci: probing for conduit method from ACPI. Mar 17 17:25:59.253487 kernel: psci: PSCIv1.0 detected in firmware. Mar 17 17:25:59.253505 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:25:59.253528 kernel: psci: Trusted OS migration not required Mar 17 17:25:59.253545 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:25:59.253562 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:25:59.253579 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:25:59.253597 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 17:25:59.253614 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:25:59.253632 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:25:59.253650 kernel: CPU features: detected: Spectre-v2 Mar 17 17:25:59.253667 kernel: CPU features: detected: Spectre-v3a Mar 17 17:25:59.253685 kernel: CPU features: detected: Spectre-BHB Mar 17 17:25:59.253702 kernel: CPU features: detected: ARM erratum 1742098 Mar 17 17:25:59.253719 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Mar 17 17:25:59.253743 kernel: alternatives: applying boot alternatives Mar 17 17:25:59.253763 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405 Mar 17 17:25:59.253782 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:25:59.253800 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:25:59.253818 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:25:59.253872 kernel: Fallback order for Node 0: 0 Mar 17 17:25:59.253897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Mar 17 17:25:59.253917 kernel: Policy zone: Normal Mar 17 17:25:59.253938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:25:59.253957 kernel: software IO TLB: area num 2. Mar 17 17:25:59.253990 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Mar 17 17:25:59.254011 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved) Mar 17 17:25:59.254030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:25:59.254048 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:25:59.254067 kernel: rcu: RCU event tracing is enabled. Mar 17 17:25:59.254085 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:25:59.254103 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:25:59.254121 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:25:59.254139 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:25:59.254156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:25:59.254174 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:25:59.254201 kernel: GICv3: 96 SPIs implemented Mar 17 17:25:59.254220 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:25:59.254238 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:25:59.254255 kernel: GICv3: GICv3 features: 16 PPIs Mar 17 17:25:59.254273 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Mar 17 17:25:59.254291 kernel: ITS [mem 0x10080000-0x1009ffff] Mar 17 17:25:59.254309 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:25:59.254327 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:25:59.254346 kernel: GICv3: using LPI property table @0x00000004000d0000 Mar 17 17:25:59.254364 kernel: ITS: Using hypervisor restricted LPI range [128] Mar 17 17:25:59.254381 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Mar 17 17:25:59.254399 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:25:59.254423 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Mar 17 17:25:59.254441 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Mar 17 17:25:59.254459 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Mar 17 17:25:59.254477 kernel: Console: colour dummy device 80x25 Mar 17 17:25:59.254495 kernel: printk: console [tty1] enabled Mar 17 17:25:59.254512 kernel: ACPI: Core revision 20230628 Mar 17 17:25:59.254530 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Mar 17 17:25:59.254548 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:25:59.254565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:25:59.254592 kernel: landlock: Up and running. Mar 17 17:25:59.254612 kernel: SELinux: Initializing. Mar 17 17:25:59.254631 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:25:59.254648 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:25:59.254666 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:25:59.254683 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:25:59.254701 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:25:59.254719 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:25:59.254736 kernel: Platform MSI: ITS@0x10080000 domain created Mar 17 17:25:59.254759 kernel: PCI/MSI: ITS@0x10080000 domain created Mar 17 17:25:59.254776 kernel: Remapping and enabling EFI services. Mar 17 17:25:59.254794 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:25:59.254811 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:25:59.254829 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Mar 17 17:25:59.254883 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Mar 17 17:25:59.254903 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Mar 17 17:25:59.254921 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:25:59.254938 kernel: SMP: Total of 2 processors activated. 
Mar 17 17:25:59.254964 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:25:59.254981 kernel: CPU features: detected: 32-bit EL1 Support Mar 17 17:25:59.254999 kernel: CPU features: detected: CRC32 instructions Mar 17 17:25:59.255028 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:25:59.255051 kernel: alternatives: applying system-wide alternatives Mar 17 17:25:59.255069 kernel: devtmpfs: initialized Mar 17 17:25:59.255088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:25:59.255106 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:25:59.255125 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:25:59.255143 kernel: SMBIOS 3.0.0 present. Mar 17 17:25:59.255167 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Mar 17 17:25:59.255185 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:25:59.255204 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:25:59.255222 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:25:59.255242 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:25:59.255261 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:25:59.255280 kernel: audit: type=2000 audit(0.231:1): state=initialized audit_enabled=0 res=1 Mar 17 17:25:59.255306 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:25:59.255324 kernel: cpuidle: using governor menu Mar 17 17:25:59.255343 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 17 17:25:59.255362 kernel: ASID allocator initialised with 65536 entries Mar 17 17:25:59.255380 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:25:59.255399 kernel: Serial: AMBA PL011 UART driver Mar 17 17:25:59.255417 kernel: Modules: 17424 pages in range for non-PLT usage Mar 17 17:25:59.255436 kernel: Modules: 508944 pages in range for PLT usage Mar 17 17:25:59.255454 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:25:59.255478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:25:59.255498 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:25:59.255517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:25:59.255536 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:25:59.255556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:25:59.255578 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:25:59.255598 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:25:59.255621 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:25:59.255639 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:25:59.255663 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:25:59.255682 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:25:59.255701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:25:59.255720 kernel: ACPI: Interpreter enabled Mar 17 17:25:59.255738 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:25:59.255756 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:25:59.255774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Mar 17 17:25:59.256192 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:25:59.256496 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:25:59.256758 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:25:59.257056 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Mar 17 17:25:59.257342 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Mar 17 17:25:59.257380 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Mar 17 17:25:59.257401 kernel: acpiphp: Slot [1] registered Mar 17 17:25:59.257420 kernel: acpiphp: Slot [2] registered Mar 17 17:25:59.257439 kernel: acpiphp: Slot [3] registered Mar 17 17:25:59.257467 kernel: acpiphp: Slot [4] registered Mar 17 17:25:59.257487 kernel: acpiphp: Slot [5] registered Mar 17 17:25:59.257506 kernel: acpiphp: Slot [6] registered Mar 17 17:25:59.257524 kernel: acpiphp: Slot [7] registered Mar 17 17:25:59.257542 kernel: acpiphp: Slot [8] registered Mar 17 17:25:59.257561 kernel: acpiphp: Slot [9] registered Mar 17 17:25:59.257579 kernel: acpiphp: Slot [10] registered Mar 17 17:25:59.257598 kernel: acpiphp: Slot [11] registered Mar 17 17:25:59.257617 kernel: acpiphp: Slot [12] registered Mar 17 17:25:59.257635 kernel: acpiphp: Slot [13] registered Mar 17 17:25:59.257660 kernel: acpiphp: Slot [14] registered Mar 17 17:25:59.257680 kernel: acpiphp: Slot [15] registered Mar 17 17:25:59.257699 kernel: acpiphp: Slot [16] registered Mar 17 17:25:59.257718 kernel: acpiphp: Slot [17] registered Mar 17 17:25:59.257737 kernel: acpiphp: Slot [18] registered Mar 17 17:25:59.257756 kernel: acpiphp: Slot [19] registered Mar 17 17:25:59.257775 kernel: acpiphp: Slot [20] registered Mar 17 17:25:59.257793 kernel: acpiphp: Slot [21] registered Mar 17 17:25:59.257812 kernel: acpiphp: Slot [22] registered Mar 17 17:25:59.257946 kernel: acpiphp: Slot [23] registered Mar 17 17:25:59.257974 kernel: acpiphp: Slot [24] registered Mar 17 17:25:59.257995 kernel: acpiphp: Slot [25] registered Mar 17 17:25:59.258015 kernel: acpiphp: Slot [26] registered Mar 17 17:25:59.258034 kernel: acpiphp: Slot [27] registered Mar 17 17:25:59.258054 kernel: acpiphp: Slot [28] registered Mar 17 17:25:59.258073 kernel: acpiphp: Slot [29] registered Mar 17 17:25:59.258092 kernel: acpiphp: Slot [30] registered Mar 17 17:25:59.258110 kernel: acpiphp: Slot [31] registered Mar 17 17:25:59.258128 kernel: PCI host bridge to bus 0000:00 Mar 17 17:25:59.258435 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Mar 17 17:25:59.258825 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:25:59.259173 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Mar 17 17:25:59.259428 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Mar 17 17:25:59.259707 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Mar 17 17:25:59.260089 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Mar 17 17:25:59.260361 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Mar 17 17:25:59.260627 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 17 17:25:59.260970 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Mar 17 17:25:59.261255 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 17 17:25:59.261520 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 17 17:25:59.261755 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Mar 17 17:25:59.264199 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Mar 17 17:25:59.264454 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Mar 17 17:25:59.264690 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 17 17:25:59.265212 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Mar 17 17:25:59.265488 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Mar 17 17:25:59.265721 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Mar 17 17:25:59.268108 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Mar 17 17:25:59.268394 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Mar 17 17:25:59.268640 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Mar 17 17:25:59.268879 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:25:59.269144 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Mar 17 17:25:59.269187 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:25:59.269208 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:25:59.269228 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:25:59.269247 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:25:59.269268 kernel: iommu: Default domain type: Translated Mar 17 17:25:59.269300 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:25:59.269321 kernel: efivars: Registered efivars operations Mar 17 17:25:59.269340 kernel: vgaarb: loaded Mar 17 17:25:59.269360 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:25:59.269380 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:25:59.269399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:25:59.269420 kernel: pnp: PnP ACPI init Mar 17 17:25:59.269746 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Mar 17 17:25:59.269799 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:25:59.269821 kernel: NET: Registered PF_INET protocol family Mar 17 17:25:59.269876 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:25:59.269899 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:25:59.269919 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:25:59.269938 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:25:59.269957 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:25:59.269977 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:25:59.269997 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:25:59.270026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:25:59.270045 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:25:59.270065 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:25:59.270084 kernel: kvm [1]: HYP mode not available Mar 17 17:25:59.270102 kernel: Initialise system trusted keyrings Mar 17 17:25:59.270122 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:25:59.270141 kernel: Key type asymmetric registered Mar 17 17:25:59.270159 kernel: Asymmetric key parser 'x509' registered Mar 17 17:25:59.270177 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:25:59.270201 kernel: io scheduler mq-deadline registered Mar 17 
17:25:59.270219 kernel: io scheduler kyber registered Mar 17 17:25:59.270238 kernel: io scheduler bfq registered Mar 17 17:25:59.270536 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Mar 17 17:25:59.270573 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:25:59.270593 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:25:59.270612 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Mar 17 17:25:59.270631 kernel: ACPI: button: Sleep Button [SLPB] Mar 17 17:25:59.270661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:25:59.270680 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 17 17:25:59.270996 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Mar 17 17:25:59.271038 kernel: printk: console [ttyS0] disabled Mar 17 17:25:59.271059 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Mar 17 17:25:59.271079 kernel: printk: console [ttyS0] enabled Mar 17 17:25:59.271099 kernel: printk: bootconsole [uart0] disabled Mar 17 17:25:59.271121 kernel: thunder_xcv, ver 1.0 Mar 17 17:25:59.271143 kernel: thunder_bgx, ver 1.0 Mar 17 17:25:59.271175 kernel: nicpf, ver 1.0 Mar 17 17:25:59.271196 kernel: nicvf, ver 1.0 Mar 17 17:25:59.271497 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:25:59.271770 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:25:58 UTC (1742232358) Mar 17 17:25:59.271805 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:25:59.271825 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Mar 17 17:25:59.274165 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:25:59.274188 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:25:59.274220 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:25:59.274240 kernel: Segment Routing with IPv6 Mar 17 17:25:59.274260 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:25:59.274278 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:25:59.274299 kernel: Key type dns_resolver registered Mar 17 17:25:59.274318 kernel: registered taskstats version 1 Mar 17 17:25:59.274339 kernel: Loading compiled-in X.509 certificates Mar 17 17:25:59.274358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c' Mar 17 17:25:59.274377 kernel: Key type .fscrypt registered Mar 17 17:25:59.274402 kernel: Key type fscrypt-provisioning registered Mar 17 17:25:59.274421 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 17:25:59.274439 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:25:59.274458 kernel: ima: No architecture policies found Mar 17 17:25:59.274478 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:25:59.274496 kernel: clk: Disabling unused clocks Mar 17 17:25:59.274515 kernel: Freeing unused kernel memory: 39744K Mar 17 17:25:59.274533 kernel: Run /init as init process Mar 17 17:25:59.274552 kernel: with arguments: Mar 17 17:25:59.274570 kernel: /init Mar 17 17:25:59.274597 kernel: with environment: Mar 17 17:25:59.274615 kernel: HOME=/ Mar 17 17:25:59.274634 kernel: TERM=linux Mar 17 17:25:59.274654 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:25:59.274680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:25:59.274706 systemd[1]: Detected virtualization amazon. Mar 17 17:25:59.274727 systemd[1]: Detected architecture arm64. Mar 17 17:25:59.274754 systemd[1]: Running in initrd. Mar 17 17:25:59.274775 systemd[1]: No hostname configured, using default hostname. Mar 17 17:25:59.274795 systemd[1]: Hostname set to . Mar 17 17:25:59.274816 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:25:59.274872 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:25:59.274898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:25:59.274918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:25:59.274942 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:25:59.274973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:25:59.274995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:25:59.275016 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:25:59.275041 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:25:59.275063 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:25:59.275085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:25:59.275106 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:25:59.275134 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:25:59.275155 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:25:59.275176 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:25:59.275196 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:25:59.275217 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:25:59.275238 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:25:59.275259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:25:59.275279 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:25:59.275299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 17 17:25:59.275326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:25:59.275346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:25:59.275367 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:25:59.275388 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:25:59.275409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:25:59.275430 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:25:59.275450 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:25:59.275471 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:25:59.275499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:25:59.275520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:25:59.275541 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:25:59.275562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:25:59.275585 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:25:59.275607 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:25:59.275688 systemd-journald[252]: Collecting audit messages is disabled. Mar 17 17:25:59.275738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:25:59.275759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:25:59.275784 kernel: Bridge firewalling registered Mar 17 17:25:59.275805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:25:59.275825 systemd-journald[252]: Journal started Mar 17 17:25:59.275905 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2d6547e7fdfc266370cd39de351729) is 8.0M, max 75.3M, 67.3M free. Mar 17 17:25:59.222968 systemd-modules-load[253]: Inserted module 'overlay' Mar 17 17:25:59.297318 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:25:59.267579 systemd-modules-load[253]: Inserted module 'br_netfilter' Mar 17 17:25:59.281438 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:25:59.282341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:25:59.285169 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:59.289349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:25:59.301265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:25:59.351670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:59.359349 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:25:59.370341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:25:59.382228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:25:59.386750 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:25:59.399113 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 17 17:25:59.445271 dracut-cmdline[291]: dracut-dracut-053 Mar 17 17:25:59.456948 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405 Mar 17 17:25:59.460699 systemd-resolved[288]: Positive Trust Anchors: Mar 17 17:25:59.460743 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:25:59.460804 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:25:59.665763 kernel: SCSI subsystem initialized Mar 17 17:25:59.673003 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:25:59.687866 kernel: iscsi: registered transport (tcp) Mar 17 17:25:59.710578 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:25:59.710663 kernel: QLogic iSCSI HBA Driver Mar 17 17:25:59.738887 kernel: random: crng init done Mar 17 17:25:59.738248 systemd-resolved[288]: Defaulting to hostname 'linux'. Mar 17 17:25:59.742038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:25:59.744395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:25:59.810951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:25:59.822161 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:25:59.865010 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:25:59.865098 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:25:59.866801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:25:59.935899 kernel: raid6: neonx8 gen() 6590 MB/s Mar 17 17:25:59.952894 kernel: raid6: neonx4 gen() 6424 MB/s Mar 17 17:25:59.969889 kernel: raid6: neonx2 gen() 5424 MB/s Mar 17 17:25:59.986898 kernel: raid6: neonx1 gen() 3912 MB/s Mar 17 17:26:00.003898 kernel: raid6: int64x8 gen() 3728 MB/s Mar 17 17:26:00.020895 kernel: raid6: int64x4 gen() 3597 MB/s Mar 17 17:26:00.037890 kernel: raid6: int64x2 gen() 3569 MB/s Mar 17 17:26:00.055721 kernel: raid6: int64x1 gen() 2750 MB/s Mar 17 17:26:00.055794 kernel: raid6: using algorithm neonx8 gen() 6590 MB/s Mar 17 17:26:00.073685 kernel: raid6: .... 
xor() 4889 MB/s, rmw enabled Mar 17 17:26:00.073761 kernel: raid6: using neon recovery algorithm Mar 17 17:26:00.082655 kernel: xor: measuring software checksum speed Mar 17 17:26:00.082733 kernel: 8regs : 11025 MB/sec Mar 17 17:26:00.083885 kernel: 32regs : 10974 MB/sec Mar 17 17:26:00.085925 kernel: arm64_neon : 8642 MB/sec Mar 17 17:26:00.085985 kernel: xor: using function: 8regs (11025 MB/sec) Mar 17 17:26:00.172877 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:26:00.191389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:26:00.201183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:26:00.239602 systemd-udevd[472]: Using default interface naming scheme 'v255'. Mar 17 17:26:00.248458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:26:00.268413 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:26:00.305821 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Mar 17 17:26:00.361373 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:26:00.373169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:26:00.503953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:26:00.515770 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:26:00.564756 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:26:00.571328 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:26:00.576164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:26:00.580744 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:26:00.594163 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:26:00.641797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:26:00.690125 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:26:00.690187 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 17 17:26:00.735550 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 17 17:26:00.735869 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 17 17:26:00.736128 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e6:65:ab:5f:ed Mar 17 17:26:00.709158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:26:00.709399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:26:00.712821 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:26:00.714990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:26:00.715286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:26:00.729349 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:26:00.744110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:26:00.766357 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:26:00.795878 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:26:00.797910 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 17 17:26:00.806958 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 17 17:26:00.810916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:26:00.821278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:26:00.821355 kernel: GPT:9289727 != 16777215 Mar 17 17:26:00.821381 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:26:00.822112 kernel: GPT:9289727 != 16777215 Mar 17 17:26:00.823160 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:26:00.824066 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 17:26:00.827248 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:26:00.860268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:26:00.935280 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (528) Mar 17 17:26:00.959878 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (541) Mar 17 17:26:00.998723 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 17 17:26:01.055507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 17 17:26:01.072681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 17 17:26:01.088027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 17 17:26:01.092753 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 17 17:26:01.103150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:26:01.126846 disk-uuid[663]: Primary Header is updated. Mar 17 17:26:01.126846 disk-uuid[663]: Secondary Entries is updated. Mar 17 17:26:01.126846 disk-uuid[663]: Secondary Header is updated. Mar 17 17:26:01.138867 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 17:26:02.156929 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 17 17:26:02.158400 disk-uuid[664]: The operation has completed successfully. Mar 17 17:26:02.335151 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:26:02.337383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:26:02.409197 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:26:02.426308 sh[926]: Success Mar 17 17:26:02.453155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:26:02.568273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:26:02.591541 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:26:02.599928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:26:02.635866 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 Mar 17 17:26:02.635944 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:26:02.635971 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:26:02.637503 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:26:02.637554 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:26:02.729887 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:26:02.767330 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:26:02.771476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:26:02.783151 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:26:02.790378 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:26:02.836887 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:26:02.836967 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:26:02.838245 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 17:26:02.847386 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 17:26:02.867033 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:26:02.865506 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:26:02.882900 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:26:02.893197 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:26:02.992516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:26:03.009189 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:26:03.067332 systemd-networkd[1118]: lo: Link UP Mar 17 17:26:03.067359 systemd-networkd[1118]: lo: Gained carrier Mar 17 17:26:03.071372 systemd-networkd[1118]: Enumeration completed Mar 17 17:26:03.071561 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:26:03.073917 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:26:03.073925 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:26:03.074659 systemd[1]: Reached target network.target - Network. Mar 17 17:26:03.078282 systemd-networkd[1118]: eth0: Link UP Mar 17 17:26:03.078291 systemd-networkd[1118]: eth0: Gained carrier Mar 17 17:26:03.078311 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:26:03.104960 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.16.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:26:03.277087 ignition[1045]: Ignition 2.20.0 Mar 17 17:26:03.277651 ignition[1045]: Stage: fetch-offline Mar 17 17:26:03.278217 ignition[1045]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:03.278243 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:03.278763 ignition[1045]: Ignition finished successfully Mar 17 17:26:03.288082 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:26:03.300157 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:26:03.326534 ignition[1127]: Ignition 2.20.0 Mar 17 17:26:03.326564 ignition[1127]: Stage: fetch Mar 17 17:26:03.327494 ignition[1127]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:03.327523 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:03.327718 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:03.349366 ignition[1127]: PUT result: OK Mar 17 17:26:03.352686 ignition[1127]: parsed url from cmdline: "" Mar 17 17:26:03.352711 ignition[1127]: no config URL provided Mar 17 17:26:03.352727 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:26:03.352757 ignition[1127]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:26:03.352814 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:03.356594 ignition[1127]: PUT result: OK Mar 17 17:26:03.356707 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 17 17:26:03.358954 ignition[1127]: GET result: OK Mar 17 17:26:03.367137 unknown[1127]: fetched base config from "system" Mar 17 17:26:03.359704 ignition[1127]: parsing config with SHA512: dbf8ef77072cccd75bab73a1b5e30d41a12fb05fba6c982bddc4e2251ca624ad422f345666c517f484cc20f5bade6ca1dd665ea243bb2a1f42a9b540b22342cc Mar 17 17:26:03.367154 unknown[1127]: fetched base config from "system" Mar 17 17:26:03.368072 ignition[1127]: fetch: fetch complete Mar 17 17:26:03.367176 unknown[1127]: fetched user config from "aws" Mar 17 17:26:03.368101 ignition[1127]: fetch: fetch passed Mar 17 17:26:03.378925 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:26:03.368206 ignition[1127]: Ignition finished successfully Mar 17 17:26:03.403224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:26:03.427197 ignition[1134]: Ignition 2.20.0 Mar 17 17:26:03.427232 ignition[1134]: Stage: kargs Mar 17 17:26:03.428388 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:03.428417 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:03.428579 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:03.432344 ignition[1134]: PUT result: OK Mar 17 17:26:03.440816 ignition[1134]: kargs: kargs passed Mar 17 17:26:03.441280 ignition[1134]: Ignition finished successfully Mar 17 17:26:03.446440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:26:03.457158 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 17 17:26:03.486068 ignition[1140]: Ignition 2.20.0 Mar 17 17:26:03.486091 ignition[1140]: Stage: disks Mar 17 17:26:03.487363 ignition[1140]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:03.487395 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:03.487565 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:03.491919 ignition[1140]: PUT result: OK Mar 17 17:26:03.500455 ignition[1140]: disks: disks passed Mar 17 17:26:03.500597 ignition[1140]: Ignition finished successfully Mar 17 17:26:03.505464 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:26:03.508183 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:26:03.514071 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:26:03.516497 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:26:03.519605 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:26:03.525270 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:26:03.534125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:26:03.581878 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:26:03.588295 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:26:03.599170 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:26:03.697928 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none. Mar 17 17:26:03.699456 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:26:03.703413 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:26:03.721005 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:26:03.727726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:26:03.730643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:26:03.730738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:26:03.730790 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:26:03.762867 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Mar 17 17:26:03.763023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:26:03.774307 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:26:03.774941 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:26:03.774972 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 17:26:03.781178 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:26:03.793305 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 17:26:03.795306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:26:04.254290 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:26:04.276509 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:26:04.285556 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:26:04.295160 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:26:04.618373 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:26:04.635184 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:26:04.642788 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:26:04.663790 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:26:04.666095 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:26:04.708145 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:26:04.713328 ignition[1281]: INFO : Ignition 2.20.0 Mar 17 17:26:04.713328 ignition[1281]: INFO : Stage: mount Mar 17 17:26:04.717757 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:04.717757 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:04.717757 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:04.724724 ignition[1281]: INFO : PUT result: OK Mar 17 17:26:04.728609 ignition[1281]: INFO : mount: mount passed Mar 17 17:26:04.728609 ignition[1281]: INFO : Ignition finished successfully Mar 17 17:26:04.733185 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:26:04.744993 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:26:04.763353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:26:04.805879 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1292) Mar 17 17:26:04.809317 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f Mar 17 17:26:04.809387 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:26:04.809413 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 17 17:26:04.815875 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 17 17:26:04.819598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:26:04.862762 ignition[1309]: INFO : Ignition 2.20.0 Mar 17 17:26:04.862762 ignition[1309]: INFO : Stage: files Mar 17 17:26:04.866141 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:04.866141 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:04.866141 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:04.873605 ignition[1309]: INFO : PUT result: OK Mar 17 17:26:04.877920 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:26:04.881887 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:26:04.881887 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:26:04.912596 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:26:04.915424 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:26:04.915424 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:26:04.914264 unknown[1309]: wrote ssh authorized keys file for user: core Mar 17 17:26:04.923102 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:26:04.926817 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:26:04.930156 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:26:04.930156 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:26:04.930156 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:26:04.930156 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:26:04.930156 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:26:04.950026 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 17 17:26:04.948459 systemd-networkd[1118]: eth0: Gained IPv6LL Mar 17 17:26:05.434598 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 17:26:05.796128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:26:05.800002 ignition[1309]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:26:05.800002 ignition[1309]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:26:05.800002 ignition[1309]: INFO : files: files passed Mar 17 17:26:05.800002 ignition[1309]: INFO : Ignition finished successfully Mar 17 17:26:05.809475 systemd[1]: Finished 
ignition-files.service - Ignition (files). Mar 17 17:26:05.828752 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:26:05.834185 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:26:05.844112 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:26:05.844311 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:26:05.871722 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:26:05.871722 initrd-setup-root-after-ignition[1338]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:26:05.879743 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:26:05.885998 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:26:05.891594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:26:05.904122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:26:05.959187 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:26:05.959580 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:26:05.966994 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:26:05.969006 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:26:05.971176 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:26:05.980116 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:26:06.021748 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:26:06.036287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:26:06.061399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:26:06.064442 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:26:06.068072 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:26:06.075337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:26:06.075577 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:26:06.078188 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:26:06.080539 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:26:06.089808 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:26:06.092042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:26:06.094547 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:26:06.096960 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:26:06.101379 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:26:06.105892 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:26:06.114995 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:26:06.117085 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:26:06.119563 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Mar 17 17:26:06.119810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:26:06.124423 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:26:06.126740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:26:06.129275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:26:06.135050 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:26:06.146110 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:26:06.146339 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:26:06.150370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:26:06.150607 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:26:06.151605 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:26:06.151795 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:26:06.174874 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:26:06.176698 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:26:06.177434 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:26:06.201575 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:26:06.208507 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:26:06.211294 ignition[1362]: INFO : Ignition 2.20.0 Mar 17 17:26:06.211294 ignition[1362]: INFO : Stage: umount Mar 17 17:26:06.211294 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:26:06.211294 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:26:06.211294 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:26:06.210978 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:26:06.233028 ignition[1362]: INFO : PUT result: OK Mar 17 17:26:06.233028 ignition[1362]: INFO : umount: umount passed Mar 17 17:26:06.233028 ignition[1362]: INFO : Ignition finished successfully Mar 17 17:26:06.216572 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:26:06.216803 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:26:06.236784 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:26:06.238929 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:26:06.246635 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:26:06.247159 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:26:06.256442 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:26:06.256559 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:26:06.259551 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:26:06.259656 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:26:06.263697 systemd[1]: Stopped target network.target - Network. Mar 17 17:26:06.265710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:26:06.267496 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:26:06.271720 systemd[1]: Stopped target paths.target - Path Units. 
Mar 17 17:26:06.281143 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:26:06.284191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:26:06.297335 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:26:06.306301 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:26:06.312333 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:26:06.312434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:26:06.314452 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:26:06.314547 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:26:06.316537 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:26:06.316653 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:26:06.318640 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:26:06.318752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:26:06.321144 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:26:06.323230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:26:06.328446 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:26:06.329708 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:26:06.329932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:26:06.335227 systemd-networkd[1118]: eth0: DHCPv6 lease lost Mar 17 17:26:06.339245 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:26:06.339492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:26:06.356328 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:26:06.356542 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:26:06.360560 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:26:06.360757 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:26:06.371116 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:26:06.371243 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:26:06.377913 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:26:06.378031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:26:06.394433 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:26:06.405746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:26:06.406377 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:26:06.410670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:26:06.410948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:26:06.425828 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:26:06.426190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:26:06.432949 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:26:06.433085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:26:06.450956 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 17 17:26:06.476697 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:26:06.477086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:26:06.480450 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:26:06.480659 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:26:06.487648 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:26:06.487807 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:26:06.494690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:26:06.494776 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:26:06.501542 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:26:06.501674 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:26:06.507908 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:26:06.508054 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:26:06.513862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:26:06.513995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:26:06.525168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:26:06.532425 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:26:06.532567 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:26:06.535148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:26:06.535261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:26:06.570189 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:26:06.570683 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:26:06.578757 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:26:06.591788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:26:06.607605 systemd[1]: Switching root. Mar 17 17:26:06.664530 systemd-journald[252]: Journal stopped Mar 17 17:26:09.105070 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Mar 17 17:26:09.105224 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:26:09.105269 kernel: SELinux: policy capability open_perms=1 Mar 17 17:26:09.105300 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:26:09.105331 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:26:09.105361 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:26:09.105391 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:26:09.105419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:26:09.105455 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:26:09.105488 kernel: audit: type=1403 audit(1742232367.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:26:09.105535 systemd[1]: Successfully loaded SELinux policy in 72.770ms. Mar 17 17:26:09.105572 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.337ms. 
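The kernel audit record above (type=1403 is the SELinux policy-load event) encodes its time as seconds since the Unix epoch followed by a per-boot serial number. Decoding the value copied from the log shows it lines up with the surrounding journal timestamps; this is only a reading aid, not part of any tooling in this boot.

    # Decoding the "audit(1742232367.205:2)" stamp from the policy-load record above.
    from datetime import datetime, timezone

    epoch_seconds = 1742232367.205   # seconds since the Unix epoch, copied from the log
    serial = 2                       # the ":2" suffix is the audit record's serial number

    ts = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    print(ts.isoformat(timespec="milliseconds"), "serial:", serial)
    # -> 2025-03-17T17:26:07.205+00:00, consistent with the surrounding Mar 17 17:26 journal lines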
Mar 17 17:26:09.105605 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:26:09.105637 systemd[1]: Detected virtualization amazon. Mar 17 17:26:09.105669 systemd[1]: Detected architecture arm64. Mar 17 17:26:09.105699 systemd[1]: Detected first boot. Mar 17 17:26:09.105730 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:26:09.105765 zram_generator::config[1405]: No configuration found. Mar 17 17:26:09.105798 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:26:09.105851 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:26:09.105893 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:26:09.105923 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:26:09.105959 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:26:09.105991 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:26:09.106021 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:26:09.106052 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:26:09.106085 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:26:09.106114 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:26:09.106143 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:26:09.106174 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:26:09.106204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:26:09.106250 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:26:09.106287 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:26:09.106319 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:26:09.106354 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:26:09.106390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:26:09.106421 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:26:09.106453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:26:09.106484 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:26:09.106515 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:26:09.106559 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:26:09.106595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:26:09.106626 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:26:09.106658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:26:09.108957 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:26:09.109005 systemd[1]: Reached target swap.target - Swaps. 
Mar 17 17:26:09.109037 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:26:09.109075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:26:09.109129 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:26:09.109162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:26:09.109192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:26:09.109225 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:26:09.109255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:26:09.109302 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:26:09.109334 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:26:09.109368 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:26:09.109403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:26:09.109434 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:26:09.109478 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:26:09.109511 systemd[1]: Reached target machines.target - Containers. Mar 17 17:26:09.109544 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:26:09.109574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:26:09.109603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:26:09.109631 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:26:09.109662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:26:09.109697 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:26:09.109729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:26:09.109758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:26:09.109786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:26:09.109816 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:26:09.109872 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:26:09.109904 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:26:09.109935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:26:09.109969 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:26:09.109998 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:26:09.110029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:26:09.110058 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:26:09.110087 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:26:09.110116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:26:09.110148 systemd[1]: verity-setup.service: Deactivated successfully. 
Mar 17 17:26:09.110179 systemd[1]: Stopped verity-setup.service. Mar 17 17:26:09.110210 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:26:09.110243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:26:09.110272 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:26:09.110300 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:26:09.110329 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:26:09.110357 kernel: fuse: init (API version 7.39) Mar 17 17:26:09.110390 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:26:09.110421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:26:09.110450 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:26:09.110478 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:26:09.110507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:26:09.110539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:26:09.110570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:26:09.110601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:26:09.110631 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:26:09.110666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:26:09.110694 kernel: loop: module loaded Mar 17 17:26:09.110723 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:26:09.110752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:26:09.110780 kernel: ACPI: bus type drm_connector registered Mar 17 17:26:09.110807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:26:09.112930 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:26:09.113008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:26:09.113044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:26:09.113090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:26:09.113126 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:26:09.113158 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:26:09.113188 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:26:09.113222 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:26:09.113259 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:26:09.113293 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:26:09.113379 systemd-journald[1487]: Collecting audit messages is disabled. Mar 17 17:26:09.113436 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:26:09.113468 systemd-journald[1487]: Journal started Mar 17 17:26:09.113518 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec2d6547e7fdfc266370cd39de351729) is 8.0M, max 75.3M, 67.3M free. Mar 17 17:26:09.116893 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 17 17:26:08.462104 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:26:08.512150 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 17 17:26:08.512985 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:26:09.125875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:26:09.135875 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:26:09.135980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:26:09.150861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:26:09.150954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:26:09.166753 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:26:09.177875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:26:09.183987 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:26:09.188576 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:26:09.191228 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:26:09.194233 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:26:09.197478 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:26:09.246979 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:26:09.270564 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:26:09.281171 kernel: loop0: detected capacity change from 0 to 113536 Mar 17 17:26:09.285180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:26:09.290669 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:26:09.306252 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:26:09.309640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:26:09.338757 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec2d6547e7fdfc266370cd39de351729 is 101.220ms for 894 entries. Mar 17 17:26:09.338757 systemd-journald[1487]: System Journal (/var/log/journal/ec2d6547e7fdfc266370cd39de351729) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:26:09.455285 systemd-journald[1487]: Received client request to flush runtime journal. Mar 17 17:26:09.455381 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:26:09.422206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:26:09.435335 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:26:09.464990 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:26:09.472895 kernel: loop1: detected capacity change from 0 to 116808 Mar 17 17:26:09.481684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:26:09.485897 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 17 17:26:09.507942 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:26:09.524494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:26:09.530036 udevadm[1548]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:26:09.585881 kernel: loop2: detected capacity change from 0 to 201592 Mar 17 17:26:09.593752 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Mar 17 17:26:09.593784 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Mar 17 17:26:09.605086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:26:09.706877 kernel: loop3: detected capacity change from 0 to 53784 Mar 17 17:26:09.826911 kernel: loop4: detected capacity change from 0 to 113536 Mar 17 17:26:09.854105 kernel: loop5: detected capacity change from 0 to 116808 Mar 17 17:26:09.880871 kernel: loop6: detected capacity change from 0 to 201592 Mar 17 17:26:09.918870 kernel: loop7: detected capacity change from 0 to 53784 Mar 17 17:26:09.930084 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 17 17:26:09.933037 (sd-merge)[1561]: Merged extensions into '/usr'. Mar 17 17:26:09.941347 systemd[1]: Reloading requested from client PID 1516 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:26:09.941374 systemd[1]: Reloading... Mar 17 17:26:10.105877 zram_generator::config[1587]: No configuration found. Mar 17 17:26:10.436718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:26:10.548135 systemd[1]: Reloading finished in 605 ms. Mar 17 17:26:10.585915 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:26:10.588856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:26:10.605250 systemd[1]: Starting ensure-sysext.service... Mar 17 17:26:10.614381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:26:10.620224 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:26:10.643174 systemd[1]: Reloading requested from client PID 1639 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:26:10.643212 systemd[1]: Reloading... Mar 17 17:26:10.677647 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:26:10.678381 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:26:10.680275 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:26:10.680885 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Mar 17 17:26:10.681047 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Mar 17 17:26:10.690893 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:26:10.691506 systemd-tmpfiles[1640]: Skipping /boot Mar 17 17:26:10.711118 systemd-udevd[1641]: Using default interface naming scheme 'v255'. Mar 17 17:26:10.732205 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 17 17:26:10.732233 systemd-tmpfiles[1640]: Skipping /boot Mar 17 17:26:10.878027 zram_generator::config[1669]: No configuration found. Mar 17 17:26:10.959933 ldconfig[1512]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:26:11.009595 (udev-worker)[1672]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:11.257359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:26:11.331034 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1673) Mar 17 17:26:11.411293 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:26:11.412743 systemd[1]: Reloading finished in 768 ms. Mar 17 17:26:11.449920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:26:11.455814 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:26:11.459085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:26:11.536678 systemd[1]: Finished ensure-sysext.service. Mar 17 17:26:11.567151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:26:11.575714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 17 17:26:11.593270 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:26:11.605189 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:26:11.608654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:26:11.611697 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:26:11.622252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:26:11.632795 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:26:11.637219 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:26:11.645254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:26:11.647634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:26:11.652726 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:26:11.668177 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:26:11.676313 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:26:11.719492 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:26:11.732197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:26:11.734372 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:26:11.748255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:26:11.758628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:26:11.767951 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
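The (sd-merge) lines above show systemd-sysext activating the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' system extensions and merging them into /usr; the kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier in this log. Purely as an illustration (systemd-sysext searches more directories than the two named in this log), the installed images could be enumerated like this:

    # Illustrative only: list sysext images in the two locations this log mentions.
    # systemd-sysext itself also consults other hierarchies such as /var/lib/extensions.
    from pathlib import Path

    for directory in (Path("/etc/extensions"), Path("/opt/extensions/kubernetes")):
        if not directory.is_dir():
            continue
        for image in sorted(directory.glob("*.raw")):
            # Resolve symlinks such as kubernetes.raw -> /opt/extensions/kubernetes/...
            print(f"{image} -> {image.resolve()}")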
Mar 17 17:26:11.771290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:26:11.771605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:26:11.774521 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:26:11.776956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:26:11.779718 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:26:11.780053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:26:11.791131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:26:11.793812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:26:11.809578 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:26:11.813728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:26:11.835298 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:26:11.837781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:26:11.837940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:26:11.850982 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:26:11.855738 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:26:11.860513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:26:11.882329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:26:11.893459 lvm[1873]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:26:11.907242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:26:11.921275 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:26:11.959949 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:26:11.969019 augenrules[1886]: No rules Mar 17 17:26:11.970650 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:26:11.971084 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:26:11.977674 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:26:11.987679 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:26:12.076521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:26:12.116296 systemd-networkd[1853]: lo: Link UP Mar 17 17:26:12.116323 systemd-networkd[1853]: lo: Gained carrier Mar 17 17:26:12.119179 systemd-networkd[1853]: Enumeration completed Mar 17 17:26:12.119382 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:26:12.123431 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:26:12.123453 systemd-networkd[1853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 17:26:12.126776 systemd-networkd[1853]: eth0: Link UP Mar 17 17:26:12.127516 systemd-networkd[1853]: eth0: Gained carrier Mar 17 17:26:12.127550 systemd-networkd[1853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:26:12.131184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:26:12.138960 systemd-networkd[1853]: eth0: DHCPv4 address 172.31.16.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:26:12.146905 systemd-resolved[1855]: Positive Trust Anchors: Mar 17 17:26:12.146966 systemd-resolved[1855]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:26:12.147029 systemd-resolved[1855]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:26:12.155783 systemd-resolved[1855]: Defaulting to hostname 'linux'. Mar 17 17:26:12.159180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:26:12.161482 systemd[1]: Reached target network.target - Network. Mar 17 17:26:12.163263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:26:12.165559 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:26:12.167766 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:26:12.170110 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:26:12.173177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:26:12.175601 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:26:12.178538 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:26:12.181038 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:26:12.181103 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:26:12.183182 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:26:12.186720 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:26:12.191474 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:26:12.201174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:26:12.204346 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:26:12.206812 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:26:12.208758 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:26:12.210787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:26:12.210889 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
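The DHCPv4 line above reports eth0 acquiring 172.31.16.223/20 from gateway 172.31.16.1. A short Python check, using only the address and prefix copied from the log, makes the shape of that lease explicit:

    # Interpreting the DHCPv4 lease logged above: 172.31.16.223/20, gateway 172.31.16.1.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.223/20")   # address/prefix from the log
    gateway = ipaddress.ip_address("172.31.16.1")        # gateway from the log

    print(iface.network)                 # 172.31.16.0/20
    print(iface.network.num_addresses)   # 4096 addresses covered by the /20
    print(gateway in iface.network)      # True: the gateway is on-link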
Mar 17 17:26:12.220631 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:26:12.226828 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:26:12.233235 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:26:12.244505 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:26:12.254941 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:26:12.257045 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:26:12.259457 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:26:12.265188 systemd[1]: Started ntpd.service - Network Time Service. Mar 17 17:26:12.269792 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:26:12.278411 jq[1909]: false Mar 17 17:26:12.284134 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:26:12.291211 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:26:12.301524 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:26:12.306819 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:26:12.308811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:26:12.334136 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:26:12.342493 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:26:12.351509 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:26:12.353410 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:26:12.364787 extend-filesystems[1910]: Found loop4 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found loop5 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found loop6 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found loop7 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found nvme0n1 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found nvme0n1p1 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found nvme0n1p2 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found nvme0n1p3 Mar 17 17:26:12.364787 extend-filesystems[1910]: Found usr Mar 17 17:26:12.411860 extend-filesystems[1910]: Found nvme0n1p4 Mar 17 17:26:12.411860 extend-filesystems[1910]: Found nvme0n1p6 Mar 17 17:26:12.411860 extend-filesystems[1910]: Found nvme0n1p7 Mar 17 17:26:12.411860 extend-filesystems[1910]: Found nvme0n1p9 Mar 17 17:26:12.411860 extend-filesystems[1910]: Checking size of /dev/nvme0n1p9 Mar 17 17:26:12.431591 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:26:12.432037 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:26:12.469486 extend-filesystems[1910]: Resized partition /dev/nvme0n1p9 Mar 17 17:26:12.471500 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 17 17:26:12.470415 dbus-daemon[1908]: [system] SELinux support is enabled Mar 17 17:26:12.493698 extend-filesystems[1941]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:26:12.510188 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 17:26:12.486463 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:26:12.486560 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:26:12.491914 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:26:12.491964 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:26:12.514551 dbus-daemon[1908]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1853 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:26:12.518786 dbus-daemon[1908]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:26:12.553193 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 17:26:12.556414 jq[1920]: true Mar 17 17:26:12.582440 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:26:12.584000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:26:12.603630 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 17:26:12.603721 update_engine[1917]: I20250317 17:26:12.600557 1917 main.cc:92] Flatcar Update Engine starting Mar 17 17:26:12.593300 ntpd[1912]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:26:12.592150 (ntainerd)[1940]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: ---------------------------------------------------- Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: corporation. 
Support and training for ntp-4 are Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: available at https://www.nwtime.org/support Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: ---------------------------------------------------- Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: proto: precision = 0.096 usec (-23) Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: basedate set to 2025-03-05 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: gps base set to 2025-03-09 (week 2357) Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listen normally on 3 eth0 172.31.16.223:123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listen normally on 4 lo [::1]:123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: bind(21) AF_INET6 fe80::4e6:65ff:feab:5fed%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: unable to create socket on eth0 (5) for fe80::4e6:65ff:feab:5fed%2#123 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: failed to init interface for address fe80::4e6:65ff:feab:5fed%2 Mar 17 17:26:12.618824 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: Listening on routing socket on fd #21 for interface updates Mar 17 17:26:12.593347 ntpd[1912]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:26:12.653274 update_engine[1917]: I20250317 17:26:12.628165 1917 update_check_scheduler.cc:74] Next update check in 9m34s Mar 17 17:26:12.636175 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:26:12.653425 extend-filesystems[1941]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 17:26:12.653425 extend-filesystems[1941]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:26:12.653425 extend-filesystems[1941]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 17:26:12.677264 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:26:12.677264 ntpd[1912]: 17 Mar 17:26:12 ntpd[1912]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:26:12.593367 ntpd[1912]: ---------------------------------------------------- Mar 17 17:26:12.636645 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:26:12.677554 extend-filesystems[1910]: Resized filesystem in /dev/nvme0n1p9 Mar 17 17:26:12.593386 ntpd[1912]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:26:12.642206 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:26:12.593404 ntpd[1912]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:26:12.694738 jq[1951]: true Mar 17 17:26:12.668328 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:26:12.593422 ntpd[1912]: corporation. Support and training for ntp-4 are Mar 17 17:26:12.679147 systemd[1]: Finished setup-oem.service - Setup OEM. 
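The extend-filesystems/resize2fs lines above grow the ext4 filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB. Converting those block counts (taken verbatim from the log) into sizes shows what the online resize actually gained:

    # Sizes implied by the resize2fs block counts logged above (4 KiB ext4 blocks).
    BLOCK_SIZE = 4096
    before_blocks, after_blocks = 553_472, 1_489_915

    for label, blocks in (("before", before_blocks), ("after", after_blocks)):
        size_bytes = blocks * BLOCK_SIZE
        print(f"{label}: {blocks} blocks = {size_bytes} bytes ≈ {size_bytes / 2**30:.2f} GiB")
    # before ≈ 2.11 GiB, after ≈ 5.68 GiB, so roughly 3.6 GiB of the partition was reclaimed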
Mar 17 17:26:12.593441 ntpd[1912]: available at https://www.nwtime.org/support Mar 17 17:26:12.593459 ntpd[1912]: ---------------------------------------------------- Mar 17 17:26:12.604180 ntpd[1912]: proto: precision = 0.096 usec (-23) Mar 17 17:26:12.604629 ntpd[1912]: basedate set to 2025-03-05 Mar 17 17:26:12.604655 ntpd[1912]: gps base set to 2025-03-09 (week 2357) Mar 17 17:26:12.611584 ntpd[1912]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:26:12.611669 ntpd[1912]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:26:12.616028 ntpd[1912]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:26:12.616096 ntpd[1912]: Listen normally on 3 eth0 172.31.16.223:123 Mar 17 17:26:12.616160 ntpd[1912]: Listen normally on 4 lo [::1]:123 Mar 17 17:26:12.616235 ntpd[1912]: bind(21) AF_INET6 fe80::4e6:65ff:feab:5fed%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:26:12.616273 ntpd[1912]: unable to create socket on eth0 (5) for fe80::4e6:65ff:feab:5fed%2#123 Mar 17 17:26:12.616305 ntpd[1912]: failed to init interface for address fe80::4e6:65ff:feab:5fed%2 Mar 17 17:26:12.616360 ntpd[1912]: Listening on routing socket on fd #21 for interface updates Mar 17 17:26:12.630967 ntpd[1912]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:26:12.631029 ntpd[1912]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:26:12.744417 systemd-logind[1916]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:26:12.744471 systemd-logind[1916]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 17:26:12.749230 systemd-logind[1916]: New seat seat0. Mar 17 17:26:12.753341 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:26:12.815650 coreos-metadata[1907]: Mar 17 17:26:12.815 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:26:12.826664 coreos-metadata[1907]: Mar 17 17:26:12.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 17 17:26:12.833783 coreos-metadata[1907]: Mar 17 17:26:12.831 INFO Fetch successful Mar 17 17:26:12.833783 coreos-metadata[1907]: Mar 17 17:26:12.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 17 17:26:12.836659 coreos-metadata[1907]: Mar 17 17:26:12.836 INFO Fetch successful Mar 17 17:26:12.836659 coreos-metadata[1907]: Mar 17 17:26:12.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 17 17:26:12.844925 coreos-metadata[1907]: Mar 17 17:26:12.841 INFO Fetch successful Mar 17 17:26:12.844925 coreos-metadata[1907]: Mar 17 17:26:12.841 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 17 17:26:12.845515 coreos-metadata[1907]: Mar 17 17:26:12.845 INFO Fetch successful Mar 17 17:26:12.845515 coreos-metadata[1907]: Mar 17 17:26:12.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 17 17:26:12.848356 coreos-metadata[1907]: Mar 17 17:26:12.847 INFO Fetch failed with 404: resource not found Mar 17 17:26:12.848356 coreos-metadata[1907]: Mar 17 17:26:12.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 17 17:26:12.851280 coreos-metadata[1907]: Mar 17 17:26:12.851 INFO Fetch successful Mar 17 17:26:12.851280 coreos-metadata[1907]: Mar 17 17:26:12.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 17 17:26:12.857934 coreos-metadata[1907]: Mar 17 17:26:12.857 INFO Fetch 
successful Mar 17 17:26:12.857934 coreos-metadata[1907]: Mar 17 17:26:12.857 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 17 17:26:12.865923 coreos-metadata[1907]: Mar 17 17:26:12.864 INFO Fetch successful Mar 17 17:26:12.865923 coreos-metadata[1907]: Mar 17 17:26:12.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 17 17:26:12.867936 coreos-metadata[1907]: Mar 17 17:26:12.867 INFO Fetch successful Mar 17 17:26:12.867936 coreos-metadata[1907]: Mar 17 17:26:12.867 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 17 17:26:12.871924 coreos-metadata[1907]: Mar 17 17:26:12.871 INFO Fetch successful Mar 17 17:26:12.906228 bash[1985]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:26:12.906052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:26:12.934457 systemd[1]: Starting sshkeys.service... Mar 17 17:26:12.995992 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:26:13.010456 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:26:13.017861 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1697) Mar 17 17:26:13.019390 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:26:13.024467 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:26:13.069725 containerd[1940]: time="2025-03-17T17:26:13.069594968Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:26:13.095442 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:26:13.184868 containerd[1940]: time="2025-03-17T17:26:13.183439340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.189643 containerd[1940]: time="2025-03-17T17:26:13.189536648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:26:13.189817 containerd[1940]: time="2025-03-17T17:26:13.189785060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:26:13.189965 containerd[1940]: time="2025-03-17T17:26:13.189935888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:26:13.191122 containerd[1940]: time="2025-03-17T17:26:13.191035700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:26:13.191683 containerd[1940]: time="2025-03-17T17:26:13.191088884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.191721 dbus-daemon[1908]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:26:13.192000 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 17 17:26:13.193927 containerd[1940]: time="2025-03-17T17:26:13.192331328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:26:13.194064 containerd[1940]: time="2025-03-17T17:26:13.193726112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.194855 containerd[1940]: time="2025-03-17T17:26:13.194648624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:26:13.194855 containerd[1940]: time="2025-03-17T17:26:13.194717408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.194855 containerd[1940]: time="2025-03-17T17:26:13.194775152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:26:13.194855 containerd[1940]: time="2025-03-17T17:26:13.194802224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.195375 containerd[1940]: time="2025-03-17T17:26:13.195339512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.197550 containerd[1940]: time="2025-03-17T17:26:13.197278761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:26:13.198745 containerd[1940]: time="2025-03-17T17:26:13.197916153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:26:13.198745 containerd[1940]: time="2025-03-17T17:26:13.197959941Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:26:13.198745 containerd[1940]: time="2025-03-17T17:26:13.198177693Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:26:13.198745 containerd[1940]: time="2025-03-17T17:26:13.198278289Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:26:13.201502 dbus-daemon[1908]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1947 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:26:13.212726 containerd[1940]: time="2025-03-17T17:26:13.212664393Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:26:13.214656 containerd[1940]: time="2025-03-17T17:26:13.213931353Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:26:13.214656 containerd[1940]: time="2025-03-17T17:26:13.214056681Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:26:13.214656 containerd[1940]: time="2025-03-17T17:26:13.214103541Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 17 17:26:13.214656 containerd[1940]: time="2025-03-17T17:26:13.214150185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:26:13.214656 containerd[1940]: time="2025-03-17T17:26:13.214451961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:26:13.216160 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 17:26:13.218775 containerd[1940]: time="2025-03-17T17:26:13.216598725Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:26:13.218989 containerd[1940]: time="2025-03-17T17:26:13.218946921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:26:13.219104 containerd[1940]: time="2025-03-17T17:26:13.219076941Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:26:13.219246 containerd[1940]: time="2025-03-17T17:26:13.219189597Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:26:13.219384 containerd[1940]: time="2025-03-17T17:26:13.219354909Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.219503 containerd[1940]: time="2025-03-17T17:26:13.219477093Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.219631 containerd[1940]: time="2025-03-17T17:26:13.219605109Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.220460 containerd[1940]: time="2025-03-17T17:26:13.220396917Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.220956609Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221007237Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221041797Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221088393Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221134869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221168925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221200437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221232045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221265393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221296869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221324469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221354781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221386065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.222877 containerd[1940]: time="2025-03-17T17:26:13.221455893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221485869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221513733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221553009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221595705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221646153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221678901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221705145Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221870517Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221918541Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221947077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.221982237Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.222005793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.222038013Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 17 17:26:13.223560 containerd[1940]: time="2025-03-17T17:26:13.222064113Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:26:13.224143 containerd[1940]: time="2025-03-17T17:26:13.222088425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:26:13.224195 containerd[1940]: time="2025-03-17T17:26:13.222628101Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:26:13.224195 containerd[1940]: time="2025-03-17T17:26:13.222725685Z" level=info msg="Connect containerd service" Mar 17 17:26:13.224195 containerd[1940]: time="2025-03-17T17:26:13.222802821Z" level=info msg="using legacy CRI server" Mar 17 17:26:13.228226 containerd[1940]: time="2025-03-17T17:26:13.222823401Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:26:13.228226 containerd[1940]: time="2025-03-17T17:26:13.227630961Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:26:13.232812 
containerd[1940]: time="2025-03-17T17:26:13.231084705Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232006137Z" level=info msg="Start subscribing containerd event" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232087029Z" level=info msg="Start recovering state" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232214505Z" level=info msg="Start event monitor" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232237617Z" level=info msg="Start snapshots syncer" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232258545Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:26:13.232812 containerd[1940]: time="2025-03-17T17:26:13.232280037Z" level=info msg="Start streaming server" Mar 17 17:26:13.235666 containerd[1940]: time="2025-03-17T17:26:13.235355853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:26:13.236287 containerd[1940]: time="2025-03-17T17:26:13.236190861Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:26:13.240535 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:26:13.267261 containerd[1940]: time="2025-03-17T17:26:13.267198213Z" level=info msg="containerd successfully booted in 0.201065s" Mar 17 17:26:13.274018 locksmithd[1957]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:26:13.283007 polkitd[2051]: Started polkitd version 121 Mar 17 17:26:13.299659 polkitd[2051]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:26:13.299789 polkitd[2051]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:26:13.303229 polkitd[2051]: Finished loading, compiling and executing 2 rules Mar 17 17:26:13.305316 dbus-daemon[1908]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:26:13.305624 systemd[1]: Started polkit.service - Authorization Manager. Mar 17 17:26:13.307051 polkitd[2051]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:26:13.377629 systemd-hostnamed[1947]: Hostname set to (transient) Mar 17 17:26:13.377631 systemd-resolved[1855]: System hostname changed to 'ip-172-31-16-223'. Mar 17 17:26:13.382867 coreos-metadata[2002]: Mar 17 17:26:13.381 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:26:13.383409 coreos-metadata[2002]: Mar 17 17:26:13.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 17 17:26:13.386296 coreos-metadata[2002]: Mar 17 17:26:13.386 INFO Fetch successful Mar 17 17:26:13.386296 coreos-metadata[2002]: Mar 17 17:26:13.386 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 17:26:13.391128 coreos-metadata[2002]: Mar 17 17:26:13.391 INFO Fetch successful Mar 17 17:26:13.395590 unknown[2002]: wrote ssh authorized keys file for user: core Mar 17 17:26:13.509081 update-ssh-keys[2095]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:26:13.511627 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:26:13.522931 systemd[1]: Finished sshkeys.service. 
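The coreos-metadata entries above show the agent first requesting an IMDSv2 session token (the PUT to http://169.254.169.254/latest/api/token) and then reading the versioned 2021-01-03 metadata paths for the hostname, public hostname, SSH public keys, and instance-identity document. A minimal sketch of the same IMDS calls using only Python's standard library; this is an illustration of the requests seen in the log, not the agent's own implementation, the helper names are mine, and the link-local address only resolves from inside an EC2 instance.

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token() -> str:
    # IMDSv2: PUT for a short-lived session token, as in the agent's
    # "Putting http://169.254.169.254/latest/api/token" entry above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Same versioned paths the metadata agent fetches (2021-01-03 API).
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in (
        "/2021-01-03/meta-data/hostname",
        "/2021-01-03/meta-data/public-hostname",
        "/2021-01-03/meta-data/public-keys/0/openssh-key",
        "/2021-01-03/dynamic/instance-identity/document",
    ):
        print(path, "->", imds_get(path, token)[:60])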
Mar 17 17:26:13.524181 systemd-networkd[1853]: eth0: Gained IPv6LL Mar 17 17:26:13.544402 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:26:13.553536 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:26:13.569541 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 17 17:26:13.584652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:13.594624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:26:13.716375 amazon-ssm-agent[2111]: Initializing new seelog logger Mar 17 17:26:13.718213 amazon-ssm-agent[2111]: New Seelog Logger Creation Complete Mar 17 17:26:13.720402 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.720402 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.722879 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 processing appconfig overrides Mar 17 17:26:13.724807 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO Proxy environment variables: Mar 17 17:26:13.725652 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.727389 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.727389 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 processing appconfig overrides Mar 17 17:26:13.728016 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.728306 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.729783 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 processing appconfig overrides Mar 17 17:26:13.734224 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.734224 amazon-ssm-agent[2111]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:26:13.734436 amazon-ssm-agent[2111]: 2025/03/17 17:26:13 processing appconfig overrides Mar 17 17:26:13.752656 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 17 17:26:13.826292 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO no_proxy: Mar 17 17:26:13.925898 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO https_proxy: Mar 17 17:26:14.025946 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO http_proxy: Mar 17 17:26:14.126874 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:26:14.223171 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:26:14.323016 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO Agent will take identity from EC2 Mar 17 17:26:14.422252 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:26:14.521888 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:26:14.544601 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:26:14.545602 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:26:14.545817 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 17 17:26:14.547187 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:26:14.547635 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:26:14.547793 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [Registrar] Starting registrar module Mar 17 17:26:14.548011 amazon-ssm-agent[2111]: 2025-03-17 17:26:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:26:14.548143 amazon-ssm-agent[2111]: 2025-03-17 17:26:14 INFO [EC2Identity] EC2 registration was successful. Mar 17 17:26:14.548343 amazon-ssm-agent[2111]: 2025-03-17 17:26:14 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:26:14.548343 amazon-ssm-agent[2111]: 2025-03-17 17:26:14 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:26:14.548476 amazon-ssm-agent[2111]: 2025-03-17 17:26:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:26:14.620647 amazon-ssm-agent[2111]: 2025-03-17 17:26:14 INFO [CredentialRefresher] Next credential rotation will be in 30.3248928555 minutes Mar 17 17:26:15.469103 sshd_keygen[1931]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:26:15.514732 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:26:15.525525 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:26:15.539205 systemd[1]: Started sshd@0-172.31.16.223:22-139.178.68.195:50390.service - OpenSSH per-connection server daemon (139.178.68.195:50390). Mar 17 17:26:15.552468 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:26:15.554001 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:26:15.572393 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 17 17:26:15.595187 ntpd[1912]: Listen normally on 6 eth0 [fe80::4e6:65ff:feab:5fed%2]:123 Mar 17 17:26:15.598275 ntpd[1912]: 17 Mar 17:26:15 ntpd[1912]: Listen normally on 6 eth0 [fe80::4e6:65ff:feab:5fed%2]:123 Mar 17 17:26:15.621532 amazon-ssm-agent[2111]: 2025-03-17 17:26:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:26:15.624948 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:26:15.636997 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:26:15.651493 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:26:15.654385 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:26:15.723076 amazon-ssm-agent[2111]: 2025-03-17 17:26:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2146) started Mar 17 17:26:15.823508 amazon-ssm-agent[2111]: 2025-03-17 17:26:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:26:15.836474 sshd[2139]: Accepted publickey for core from 139.178.68.195 port 50390 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:15.837878 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:15.864881 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:26:15.877446 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:26:15.889533 systemd-logind[1916]: New session 1 of user core. Mar 17 17:26:15.921964 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:26:15.936053 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:26:15.957804 (systemd)[2160]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:26:15.991929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:15.998376 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:26:16.007516 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:26:16.200599 systemd[2160]: Queued start job for default target default.target. Mar 17 17:26:16.208590 systemd[2160]: Created slice app.slice - User Application Slice. Mar 17 17:26:16.208656 systemd[2160]: Reached target paths.target - Paths. Mar 17 17:26:16.208690 systemd[2160]: Reached target timers.target - Timers. Mar 17 17:26:16.211608 systemd[2160]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:26:16.245551 systemd[2160]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:26:16.246201 systemd[2160]: Reached target sockets.target - Sockets. Mar 17 17:26:16.246246 systemd[2160]: Reached target basic.target - Basic System. Mar 17 17:26:16.246339 systemd[2160]: Reached target default.target - Main User Target. Mar 17 17:26:16.246405 systemd[2160]: Startup finished in 270ms. Mar 17 17:26:16.246780 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:26:16.257170 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:26:16.259954 systemd[1]: Startup finished in 1.194s (kernel) + 8.384s (initrd) + 9.124s (userspace) = 18.704s. 
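The "Accepted publickey for core ... RSA SHA256:d/UruLZo..." entries in this session log identify the client key by its OpenSSH SHA-256 fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing '=' padding stripped. A short sketch of that derivation; the sample key line is hypothetical, while the real key on this host lives in /home/core/.ssh/authorized_keys as noted above.

import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # An authorized_keys-style line: "<type> <base64 blob> [comment]".
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical key material, for illustration only.
sample = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 example"
print(openssh_sha256_fingerprint(sample))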
Mar 17 17:26:16.428494 systemd[1]: Started sshd@1-172.31.16.223:22-139.178.68.195:51642.service - OpenSSH per-connection server daemon (139.178.68.195:51642). Mar 17 17:26:16.632740 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 51642 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:16.636174 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:16.649226 systemd-logind[1916]: New session 2 of user core. Mar 17 17:26:16.655252 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:26:16.788048 sshd[2188]: Connection closed by 139.178.68.195 port 51642 Mar 17 17:26:16.788959 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:16.795513 systemd[1]: sshd@1-172.31.16.223:22-139.178.68.195:51642.service: Deactivated successfully. Mar 17 17:26:16.799574 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:26:16.803955 systemd-logind[1916]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:26:16.807978 systemd-logind[1916]: Removed session 2. Mar 17 17:26:16.827576 systemd[1]: Started sshd@2-172.31.16.223:22-139.178.68.195:51658.service - OpenSSH per-connection server daemon (139.178.68.195:51658). Mar 17 17:26:17.023694 sshd[2193]: Accepted publickey for core from 139.178.68.195 port 51658 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:17.027036 sshd-session[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:17.040048 systemd-logind[1916]: New session 3 of user core. Mar 17 17:26:17.045234 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:26:17.167773 sshd[2195]: Connection closed by 139.178.68.195 port 51658 Mar 17 17:26:17.167507 sshd-session[2193]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:17.175102 systemd[1]: sshd@2-172.31.16.223:22-139.178.68.195:51658.service: Deactivated successfully. Mar 17 17:26:17.179761 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:26:17.185966 systemd-logind[1916]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:26:17.208557 systemd[1]: Started sshd@3-172.31.16.223:22-139.178.68.195:51660.service - OpenSSH per-connection server daemon (139.178.68.195:51660). Mar 17 17:26:17.211988 systemd-logind[1916]: Removed session 3. Mar 17 17:26:17.243487 kubelet[2167]: E0317 17:26:17.243388 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:26:17.249321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:26:17.249669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:17.250411 systemd[1]: kubelet.service: Consumed 1.349s CPU time. Mar 17 17:26:17.410463 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 51660 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:17.413175 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:17.422342 systemd-logind[1916]: New session 4 of user core. Mar 17 17:26:17.432226 systemd[1]: Started session-4.scope - Session 4 of User core. 
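The kubelet exit above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is the expected result of starting kubelet.service before the node has been configured; the unit keeps failing until something writes that file. A minimal sketch of seeding a placeholder configuration at the path named in the error, assuming the file would normally be produced by kubeadm or configuration management rather than written by hand; the contents below are illustrative, setting only the systemd cgroup driver already in use elsewhere on this host.

from pathlib import Path

# Hypothetical, minimal KubeletConfiguration; only cgroupDriver is set here.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

path = Path("/var/lib/kubelet/config.yaml")  # path from the error above
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path} ({len(KUBELET_CONFIG)} bytes)")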
Mar 17 17:26:17.558934 sshd[2204]: Connection closed by 139.178.68.195 port 51660 Mar 17 17:26:17.558699 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:17.565229 systemd[1]: sshd@3-172.31.16.223:22-139.178.68.195:51660.service: Deactivated successfully. Mar 17 17:26:17.565332 systemd-logind[1916]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:26:17.568877 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:26:17.580055 systemd-logind[1916]: Removed session 4. Mar 17 17:26:17.605376 systemd[1]: Started sshd@4-172.31.16.223:22-139.178.68.195:51674.service - OpenSSH per-connection server daemon (139.178.68.195:51674). Mar 17 17:26:17.787553 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 51674 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:17.790101 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:17.799094 systemd-logind[1916]: New session 5 of user core. Mar 17 17:26:17.808156 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:26:17.933679 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:26:17.934481 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:17.953699 sudo[2212]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:17.977079 sshd[2211]: Connection closed by 139.178.68.195 port 51674 Mar 17 17:26:17.978310 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:17.984435 systemd-logind[1916]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:26:17.985179 systemd[1]: sshd@4-172.31.16.223:22-139.178.68.195:51674.service: Deactivated successfully. Mar 17 17:26:17.989658 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:26:17.993611 systemd-logind[1916]: Removed session 5. Mar 17 17:26:18.024345 systemd[1]: Started sshd@5-172.31.16.223:22-139.178.68.195:51684.service - OpenSSH per-connection server daemon (139.178.68.195:51684). Mar 17 17:26:18.212245 sshd[2217]: Accepted publickey for core from 139.178.68.195 port 51684 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:18.215047 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:18.223303 systemd-logind[1916]: New session 6 of user core. Mar 17 17:26:18.234173 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:26:18.343465 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:26:18.344200 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:18.352054 sudo[2221]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:18.363215 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:26:18.364151 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:18.388476 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:26:18.443669 augenrules[2243]: No rules Mar 17 17:26:18.446094 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:26:18.447971 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Mar 17 17:26:18.452009 sudo[2220]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:18.476293 sshd[2219]: Connection closed by 139.178.68.195 port 51684 Mar 17 17:26:18.477933 sshd-session[2217]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:18.484622 systemd-logind[1916]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:26:18.486181 systemd[1]: sshd@5-172.31.16.223:22-139.178.68.195:51684.service: Deactivated successfully. Mar 17 17:26:18.489335 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:26:18.493340 systemd-logind[1916]: Removed session 6. Mar 17 17:26:18.514370 systemd[1]: Started sshd@6-172.31.16.223:22-139.178.68.195:51698.service - OpenSSH per-connection server daemon (139.178.68.195:51698). Mar 17 17:26:18.704136 sshd[2251]: Accepted publickey for core from 139.178.68.195 port 51698 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:18.706737 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:18.715682 systemd-logind[1916]: New session 7 of user core. Mar 17 17:26:18.724148 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:26:18.830918 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:26:18.831630 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:19.404993 systemd-resolved[1855]: Clock change detected. Flushing caches. Mar 17 17:26:19.909567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:19.910119 systemd[1]: kubelet.service: Consumed 1.349s CPU time. Mar 17 17:26:19.921185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:19.984227 systemd[1]: Reloading requested from client PID 2287 ('systemctl') (unit session-7.scope)... Mar 17 17:26:19.984416 systemd[1]: Reloading... Mar 17 17:26:20.249684 zram_generator::config[2330]: No configuration found. Mar 17 17:26:20.509051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:26:20.672426 systemd[1]: Reloading finished in 687 ms. Mar 17 17:26:20.760888 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:26:20.761274 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:26:20.761901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:20.770318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:21.081379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:21.098216 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:26:21.170964 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:26:21.171434 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 17 17:26:21.171516 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:26:21.171842 kubelet[2389]: I0317 17:26:21.171783 2389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:26:22.916914 kubelet[2389]: I0317 17:26:22.916859 2389 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:26:22.917510 kubelet[2389]: I0317 17:26:22.917486 2389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:26:22.919419 kubelet[2389]: I0317 17:26:22.918240 2389 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:26:22.963765 kubelet[2389]: I0317 17:26:22.963708 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:26:22.982503 kubelet[2389]: E0317 17:26:22.982433 2389 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:26:22.982503 kubelet[2389]: I0317 17:26:22.982491 2389 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:26:22.988545 kubelet[2389]: I0317 17:26:22.988109 2389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:26:22.991207 kubelet[2389]: I0317 17:26:22.991135 2389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:26:22.991516 kubelet[2389]: I0317 17:26:22.991209 2389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172.31.16.223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:26:22.991742 kubelet[2389]: I0317 17:26:22.991555 2389 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:26:22.991742 kubelet[2389]: I0317 17:26:22.991579 2389 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:26:22.991879 kubelet[2389]: I0317 17:26:22.991867 2389 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:26:22.995264 kubelet[2389]: I0317 17:26:22.995220 2389 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:26:22.995264 kubelet[2389]: I0317 17:26:22.995265 2389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:26:22.995433 kubelet[2389]: I0317 17:26:22.995299 2389 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:26:22.995433 kubelet[2389]: I0317 17:26:22.995319 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:26:22.996993 kubelet[2389]: E0317 17:26:22.996131 2389 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:22.996993 kubelet[2389]: E0317 17:26:22.996582 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:23.001848 kubelet[2389]: I0317 17:26:23.000529 2389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:26:23.001848 kubelet[2389]: I0317 17:26:23.001382 2389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:26:23.001848 kubelet[2389]: W0317 17:26:23.001492 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
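The HardEvictionThresholds in the nodeConfig dump above mix an absolute quantity (memory.available below 100Mi) with percentages of capacity (nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%). A small sketch of how such a threshold evaluates, using hypothetical node capacities that are not taken from this host.

Mi = 1024 ** 2
GiB = 1024 ** 3

def breaches(available: float, capacity: float,
             quantity: float | None = None,
             percentage: float | None = None) -> bool:
    """True when the available amount has dropped below the hard-eviction threshold."""
    threshold = quantity if quantity is not None else capacity * percentage
    return available < threshold

# Hypothetical capacities: 4 GiB of RAM, a 20 GiB node filesystem.
print(breaches(available=90 * Mi, capacity=4 * GiB, quantity=100 * Mi))   # memory.available < 100Mi -> True
print(breaches(available=3 * GiB, capacity=20 * GiB, percentage=0.10))    # nodefs.available < 10%   -> False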
Mar 17 17:26:23.002935 kubelet[2389]: I0317 17:26:23.002885 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:26:23.003028 kubelet[2389]: I0317 17:26:23.002944 2389 server.go:1287] "Started kubelet" Mar 17 17:26:23.006547 kubelet[2389]: I0317 17:26:23.005625 2389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:26:23.007544 kubelet[2389]: I0317 17:26:23.007513 2389 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:26:23.011497 kubelet[2389]: I0317 17:26:23.011394 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:26:23.011983 kubelet[2389]: I0317 17:26:23.011941 2389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:26:23.015453 kubelet[2389]: I0317 17:26:23.015412 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:26:23.016564 kubelet[2389]: E0317 17:26:23.016322 2389 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.223.182da71f21123645 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.223,UID:172.31.16.223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.16.223,},FirstTimestamp:2025-03-17 17:26:23.002916421 +0000 UTC m=+1.897887886,LastTimestamp:2025-03-17 17:26:23.002916421 +0000 UTC m=+1.897887886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.223,}" Mar 17 17:26:23.022005 kubelet[2389]: I0317 17:26:23.021958 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:26:23.026163 kubelet[2389]: E0317 17:26:23.025845 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.026163 kubelet[2389]: I0317 17:26:23.025904 2389 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:26:23.026379 kubelet[2389]: I0317 17:26:23.026213 2389 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:26:23.026379 kubelet[2389]: I0317 17:26:23.026310 2389 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:26:23.031478 kubelet[2389]: I0317 17:26:23.030099 2389 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:26:23.031478 kubelet[2389]: I0317 17:26:23.030815 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:26:23.037417 kubelet[2389]: E0317 17:26:23.037347 2389 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:26:23.038964 kubelet[2389]: I0317 17:26:23.038902 2389 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:26:23.054520 kubelet[2389]: E0317 17:26:23.053828 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.223\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:26:23.054520 kubelet[2389]: W0317 17:26:23.053951 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:26:23.054520 kubelet[2389]: E0317 17:26:23.053991 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:26:23.054520 kubelet[2389]: W0317 17:26:23.054055 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.16.223" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:26:23.054520 kubelet[2389]: E0317 17:26:23.054079 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.16.223\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 17:26:23.054520 kubelet[2389]: W0317 17:26:23.054238 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:26:23.054520 kubelet[2389]: E0317 17:26:23.054264 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 17 17:26:23.074995 kubelet[2389]: I0317 17:26:23.074861 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:26:23.074995 kubelet[2389]: I0317 17:26:23.074932 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:26:23.075254 kubelet[2389]: I0317 17:26:23.074967 2389 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:26:23.080840 kubelet[2389]: I0317 17:26:23.080283 2389 policy_none.go:49] "None policy: Start" Mar 17 17:26:23.080840 kubelet[2389]: I0317 17:26:23.080339 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:26:23.080840 kubelet[2389]: I0317 17:26:23.080363 2389 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:26:23.100820 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:26:23.125715 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 17 17:26:23.126152 kubelet[2389]: E0317 17:26:23.126039 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.140612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:26:23.153477 kubelet[2389]: I0317 17:26:23.152296 2389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:26:23.153477 kubelet[2389]: I0317 17:26:23.152575 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:26:23.153477 kubelet[2389]: I0317 17:26:23.152593 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:26:23.153477 kubelet[2389]: I0317 17:26:23.153393 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:26:23.156504 kubelet[2389]: E0317 17:26:23.156454 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:26:23.156733 kubelet[2389]: E0317 17:26:23.156707 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.223\" not found" Mar 17 17:26:23.185197 kubelet[2389]: I0317 17:26:23.185109 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:26:23.187978 kubelet[2389]: I0317 17:26:23.187911 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:26:23.187978 kubelet[2389]: I0317 17:26:23.187970 2389 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:26:23.188168 kubelet[2389]: I0317 17:26:23.188007 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:26:23.188168 kubelet[2389]: I0317 17:26:23.188022 2389 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:26:23.188168 kubelet[2389]: E0317 17:26:23.188096 2389 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 17:26:23.254479 kubelet[2389]: I0317 17:26:23.254263 2389 kubelet_node_status.go:76] "Attempting to register node" node="172.31.16.223" Mar 17 17:26:23.266309 kubelet[2389]: I0317 17:26:23.266143 2389 kubelet_node_status.go:79] "Successfully registered node" node="172.31.16.223" Mar 17 17:26:23.266309 kubelet[2389]: E0317 17:26:23.266191 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.16.223\": node \"172.31.16.223\" not found" Mar 17 17:26:23.288361 kubelet[2389]: E0317 17:26:23.288296 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.388907 kubelet[2389]: E0317 17:26:23.388838 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.489757 kubelet[2389]: E0317 17:26:23.489554 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.590602 kubelet[2389]: E0317 17:26:23.590550 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.641301 sudo[2254]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:23.664701 sshd[2253]: Connection closed by 139.178.68.195 port 51698 Mar 17 17:26:23.665486 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:23.672304 systemd[1]: sshd@6-172.31.16.223:22-139.178.68.195:51698.service: Deactivated successfully. Mar 17 17:26:23.677914 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:26:23.679253 systemd-logind[1916]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:26:23.681093 systemd-logind[1916]: Removed session 7. 
Mar 17 17:26:23.691418 kubelet[2389]: E0317 17:26:23.691354 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.792213 kubelet[2389]: E0317 17:26:23.792058 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.892757 kubelet[2389]: E0317 17:26:23.892695 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.921218 kubelet[2389]: I0317 17:26:23.921138 2389 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:26:23.921924 kubelet[2389]: W0317 17:26:23.921336 2389 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:26:23.921924 kubelet[2389]: W0317 17:26:23.921390 2389 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:26:23.993900 kubelet[2389]: E0317 17:26:23.993837 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:23.997088 kubelet[2389]: E0317 17:26:23.997050 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:24.094745 kubelet[2389]: E0317 17:26:24.094586 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:24.195164 kubelet[2389]: E0317 17:26:24.195110 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:24.295294 kubelet[2389]: E0317 17:26:24.295227 2389 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.16.223\" not found" Mar 17 17:26:24.396662 kubelet[2389]: I0317 17:26:24.396501 2389 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:26:24.397081 containerd[1940]: time="2025-03-17T17:26:24.397002100Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:26:24.397752 kubelet[2389]: I0317 17:26:24.397518 2389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:26:24.997847 kubelet[2389]: I0317 17:26:24.997791 2389 apiserver.go:52] "Watching apiserver" Mar 17 17:26:24.998523 kubelet[2389]: E0317 17:26:24.997777 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:25.020403 systemd[1]: Created slice kubepods-besteffort-pod40d0c7d1_a97a_40f6_8ea5_ccde4c0ee858.slice - libcontainer container kubepods-besteffort-pod40d0c7d1_a97a_40f6_8ea5_ccde4c0ee858.slice. 
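Two entries tie together here: containerd reported earlier that no network config was found in /etc/cni/net.d, and the kubelet now pushes the pod CIDR 192.168.1.0/24 through CRI while containerd keeps waiting for another component to drop a CNI config ("No cni config template is specified, wait for other system components to drop the config."). A sketch of the kind of conflist such a component might install, assuming a plain bridge/host-local setup purely for illustration; the Cilium pods created below install their own, different configuration.

import json
from pathlib import Path

# Illustrative only: a minimal bridge + host-local conflist using the pod CIDR
# reported by the kubelet above (192.168.1.0/24).
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"},
        }
    ],
}

conf_dir = Path("/etc/cni/net.d")  # directory containerd watches for CNI configs
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example.conflist").write_text(json.dumps(conflist, indent=2))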
Mar 17 17:26:25.027409 kubelet[2389]: I0317 17:26:25.027341 2389 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:26:25.036963 kubelet[2389]: I0317 17:26:25.036886 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858-kube-proxy\") pod \"kube-proxy-vgv4h\" (UID: \"40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858\") " pod="kube-system/kube-proxy-vgv4h" Mar 17 17:26:25.037115 kubelet[2389]: I0317 17:26:25.036972 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-run\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037115 kubelet[2389]: I0317 17:26:25.037015 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-hostproc\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037115 kubelet[2389]: I0317 17:26:25.037056 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-cgroup\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037115 kubelet[2389]: I0317 17:26:25.037092 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cni-path\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037338 kubelet[2389]: I0317 17:26:25.037142 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-etc-cni-netd\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037338 kubelet[2389]: I0317 17:26:25.037186 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-hubble-tls\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037338 kubelet[2389]: I0317 17:26:25.037223 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-xtables-lock\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037338 kubelet[2389]: I0317 17:26:25.037264 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-config-path\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037338 kubelet[2389]: I0317 17:26:25.037299 
2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-net\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037340 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-kernel\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037391 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858-xtables-lock\") pod \"kube-proxy-vgv4h\" (UID: \"40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858\") " pod="kube-system/kube-proxy-vgv4h" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037427 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858-lib-modules\") pod \"kube-proxy-vgv4h\" (UID: \"40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858\") " pod="kube-system/kube-proxy-vgv4h" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037461 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-bpf-maps\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037495 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-lib-modules\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037598 kubelet[2389]: I0317 17:26:25.037549 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fcac3214-4e7e-4b38-ac80-365486e6c93e-clustermesh-secrets\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037956 kubelet[2389]: I0317 17:26:25.037593 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvd2s\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-kube-api-access-dvd2s\") pod \"cilium-fm4nm\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " pod="kube-system/cilium-fm4nm" Mar 17 17:26:25.037956 kubelet[2389]: I0317 17:26:25.037724 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2p4h\" (UniqueName: \"kubernetes.io/projected/40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858-kube-api-access-c2p4h\") pod \"kube-proxy-vgv4h\" (UID: \"40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858\") " pod="kube-system/kube-proxy-vgv4h" Mar 17 17:26:25.038352 systemd[1]: Created slice kubepods-burstable-podfcac3214_4e7e_4b38_ac80_365486e6c93e.slice - libcontainer container 
kubepods-burstable-podfcac3214_4e7e_4b38_ac80_365486e6c93e.slice. Mar 17 17:26:25.332525 containerd[1940]: time="2025-03-17T17:26:25.332197853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgv4h,Uid:40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:25.352334 containerd[1940]: time="2025-03-17T17:26:25.352240757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fm4nm,Uid:fcac3214-4e7e-4b38-ac80-365486e6c93e,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:25.872893 containerd[1940]: time="2025-03-17T17:26:25.872455796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:26:25.874555 containerd[1940]: time="2025-03-17T17:26:25.874491212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:26:25.876976 containerd[1940]: time="2025-03-17T17:26:25.876538232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:26:25.878724 containerd[1940]: time="2025-03-17T17:26:25.878661776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:26:25.880017 containerd[1940]: time="2025-03-17T17:26:25.879955952Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:26:25.887627 containerd[1940]: time="2025-03-17T17:26:25.887532920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:26:25.889688 containerd[1940]: time="2025-03-17T17:26:25.889478588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.158395ms" Mar 17 17:26:25.895794 containerd[1940]: time="2025-03-17T17:26:25.895715324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.288303ms" Mar 17 17:26:25.999050 kubelet[2389]: E0317 17:26:25.998979 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:26.239342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027403011.mount: Deactivated successfully. Mar 17 17:26:26.254385 containerd[1940]: time="2025-03-17T17:26:26.253629210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:26.254385 containerd[1940]: time="2025-03-17T17:26:26.254019558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:26.254385 containerd[1940]: time="2025-03-17T17:26:26.254058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:26.254715 containerd[1940]: time="2025-03-17T17:26:26.254578026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:26.256191 containerd[1940]: time="2025-03-17T17:26:26.255564486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:26.256191 containerd[1940]: time="2025-03-17T17:26:26.255716310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:26.256191 containerd[1940]: time="2025-03-17T17:26:26.255752190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:26.256191 containerd[1940]: time="2025-03-17T17:26:26.255946854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:26.437959 systemd[1]: Started cri-containerd-2ad9d37bf5b2b9929e6fe65993639682e2d5781b7cd15892ee5ec86f94b3e258.scope - libcontainer container 2ad9d37bf5b2b9929e6fe65993639682e2d5781b7cd15892ee5ec86f94b3e258. Mar 17 17:26:26.449971 systemd[1]: Started cri-containerd-c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265.scope - libcontainer container c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265. Mar 17 17:26:26.509898 containerd[1940]: time="2025-03-17T17:26:26.508807027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgv4h,Uid:40d0c7d1-a97a-40f6-8ea5-ccde4c0ee858,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad9d37bf5b2b9929e6fe65993639682e2d5781b7cd15892ee5ec86f94b3e258\"" Mar 17 17:26:26.516954 containerd[1940]: time="2025-03-17T17:26:26.516796639Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:26:26.520664 containerd[1940]: time="2025-03-17T17:26:26.520139203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fm4nm,Uid:fcac3214-4e7e-4b38-ac80-365486e6c93e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\"" Mar 17 17:26:26.999258 kubelet[2389]: E0317 17:26:26.999106 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:27.945893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241614930.mount: Deactivated successfully. 
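The kubelet error that repeats throughout this log, "Unable to read config path ... /etc/kubernetes/manifests", appears to be kubelet polling its file-based (static pod) config source; the path simply does not exist on this node, so the message is benign but recurs roughly once a second in this section. A minimal Python sketch, assuming only the journal layout visible above (the regex is an assumption about this formatting, not a kubelet API), that tallies how often each distinct kubelet error repeats in a dump like this:

    # Tally repeated kubelet error records of the form
    #   kubelet[2389]: E0317 17:26:25.998979 2389 file_linux.go:61] "Unable to read config path" ...
    import re
    import sys
    from collections import Counter

    KLOG_ERR = re.compile(r'kubelet\[\d+\]: E\d{4} [\d:.]+\s+\d+ (\S+)\] ("(?:[^"\\]|\\.)*")')

    def tally_kubelet_errors(text):
        """Count (source location, message) pairs across the whole dump."""
        return Counter(KLOG_ERR.findall(text))

    if __name__ == "__main__":
        for (src, msg), n in tally_kubelet_errors(sys.stdin.read()).most_common():
            print(f"{n:6d}  {src}  {msg}")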
Mar 17 17:26:27.999506 kubelet[2389]: E0317 17:26:27.999410 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:28.508200 containerd[1940]: time="2025-03-17T17:26:28.507784353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:28.510987 containerd[1940]: time="2025-03-17T17:26:28.510935253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 17 17:26:28.511971 containerd[1940]: time="2025-03-17T17:26:28.511889061Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:28.515846 containerd[1940]: time="2025-03-17T17:26:28.515758209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:28.517688 containerd[1940]: time="2025-03-17T17:26:28.517418745Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 2.000559718s" Mar 17 17:26:28.517688 containerd[1940]: time="2025-03-17T17:26:28.517475289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 17 17:26:28.520305 containerd[1940]: time="2025-03-17T17:26:28.520243581Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:26:28.522985 containerd[1940]: time="2025-03-17T17:26:28.522912297Z" level=info msg="CreateContainer within sandbox \"2ad9d37bf5b2b9929e6fe65993639682e2d5781b7cd15892ee5ec86f94b3e258\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:26:28.547462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951451849.mount: Deactivated successfully. Mar 17 17:26:28.558365 containerd[1940]: time="2025-03-17T17:26:28.558283233Z" level=info msg="CreateContainer within sandbox \"2ad9d37bf5b2b9929e6fe65993639682e2d5781b7cd15892ee5ec86f94b3e258\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"833ad2af1db4e75ba97bad9f98bf1cd6bb8c0eb5b38f7e806c869eef60ece23c\"" Mar 17 17:26:28.559382 containerd[1940]: time="2025-03-17T17:26:28.559259637Z" level=info msg="StartContainer for \"833ad2af1db4e75ba97bad9f98bf1cd6bb8c0eb5b38f7e806c869eef60ece23c\"" Mar 17 17:26:28.611983 systemd[1]: Started cri-containerd-833ad2af1db4e75ba97bad9f98bf1cd6bb8c0eb5b38f7e806c869eef60ece23c.scope - libcontainer container 833ad2af1db4e75ba97bad9f98bf1cd6bb8c0eb5b38f7e806c869eef60ece23c. 
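The kube-proxy pull just above reports both the transferred size and the wall-clock time (size "27369114" in 2.000559718s), so the effective transfer rate falls straight out of the entry. A quick check with the numbers copied from the log (the variable names are ours):

    # Back-of-the-envelope throughput for the registry.k8s.io/kube-proxy:v1.32.3 pull above.
    size_bytes = 27_369_114      # "size" reported by containerd
    duration_s = 2.000559718     # pull duration reported by containerd

    rate = size_bytes / duration_s
    print(f"{rate / 1024 / 1024:.2f} MiB/s")   # ~13.05 MiB/s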
Mar 17 17:26:28.665183 containerd[1940]: time="2025-03-17T17:26:28.665080893Z" level=info msg="StartContainer for \"833ad2af1db4e75ba97bad9f98bf1cd6bb8c0eb5b38f7e806c869eef60ece23c\" returns successfully" Mar 17 17:26:29.000865 kubelet[2389]: E0317 17:26:29.000757 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:30.001800 kubelet[2389]: E0317 17:26:30.001739 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:31.002842 kubelet[2389]: E0317 17:26:31.002791 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:32.004019 kubelet[2389]: E0317 17:26:32.003969 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:33.004679 kubelet[2389]: E0317 17:26:33.004201 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:33.853047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650516302.mount: Deactivated successfully. Mar 17 17:26:34.004425 kubelet[2389]: E0317 17:26:34.004370 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:35.005569 kubelet[2389]: E0317 17:26:35.005487 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:36.005890 kubelet[2389]: E0317 17:26:36.005820 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:36.399818 containerd[1940]: time="2025-03-17T17:26:36.399360664Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:36.401496 containerd[1940]: time="2025-03-17T17:26:36.401401972Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:26:36.403191 containerd[1940]: time="2025-03-17T17:26:36.403119988Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:36.407027 containerd[1940]: time="2025-03-17T17:26:36.406832212Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.886525127s" Mar 17 17:26:36.407027 containerd[1940]: time="2025-03-17T17:26:36.406893268Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:26:36.412310 containerd[1940]: time="2025-03-17T17:26:36.412090468Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:26:36.429896 containerd[1940]: time="2025-03-17T17:26:36.429749752Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\"" Mar 17 17:26:36.430575 containerd[1940]: time="2025-03-17T17:26:36.430515964Z" level=info msg="StartContainer for \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\"" Mar 17 17:26:36.480999 systemd[1]: Started cri-containerd-3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44.scope - libcontainer container 3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44. Mar 17 17:26:36.529743 containerd[1940]: time="2025-03-17T17:26:36.529106585Z" level=info msg="StartContainer for \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\" returns successfully" Mar 17 17:26:36.550768 systemd[1]: cri-containerd-3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44.scope: Deactivated successfully. Mar 17 17:26:36.590101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44-rootfs.mount: Deactivated successfully. Mar 17 17:26:37.005961 kubelet[2389]: E0317 17:26:37.005917 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:37.308926 kubelet[2389]: I0317 17:26:37.308742 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgv4h" podStartSLOduration=12.30536243 podStartE2EDuration="14.308695888s" podCreationTimestamp="2025-03-17 17:26:23 +0000 UTC" firstStartedPulling="2025-03-17 17:26:26.515703019 +0000 UTC m=+5.410674448" lastFinishedPulling="2025-03-17 17:26:28.519036489 +0000 UTC m=+7.414007906" observedRunningTime="2025-03-17 17:26:29.271605837 +0000 UTC m=+8.166577290" watchObservedRunningTime="2025-03-17 17:26:37.308695888 +0000 UTC m=+16.203667329" Mar 17 17:26:37.827344 containerd[1940]: time="2025-03-17T17:26:37.827187835Z" level=info msg="shim disconnected" id=3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44 namespace=k8s.io Mar 17 17:26:37.827344 containerd[1940]: time="2025-03-17T17:26:37.827281819Z" level=warning msg="cleaning up after shim disconnected" id=3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44 namespace=k8s.io Mar 17 17:26:37.827344 containerd[1940]: time="2025-03-17T17:26:37.827301199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:38.007907 kubelet[2389]: E0317 17:26:38.007824 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:38.287087 containerd[1940]: time="2025-03-17T17:26:38.287025209Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:26:38.317560 containerd[1940]: time="2025-03-17T17:26:38.317485901Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\"" Mar 17 17:26:38.319802 containerd[1940]: 
time="2025-03-17T17:26:38.318446117Z" level=info msg="StartContainer for \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\"" Mar 17 17:26:38.373991 systemd[1]: Started cri-containerd-eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8.scope - libcontainer container eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8. Mar 17 17:26:38.419227 containerd[1940]: time="2025-03-17T17:26:38.419081862Z" level=info msg="StartContainer for \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\" returns successfully" Mar 17 17:26:38.438418 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:26:38.439264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:26:38.439387 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:26:38.449024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:26:38.449472 systemd[1]: cri-containerd-eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8.scope: Deactivated successfully. Mar 17 17:26:38.499172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8-rootfs.mount: Deactivated successfully. Mar 17 17:26:38.502239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:26:38.513464 containerd[1940]: time="2025-03-17T17:26:38.513376062Z" level=info msg="shim disconnected" id=eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8 namespace=k8s.io Mar 17 17:26:38.513464 containerd[1940]: time="2025-03-17T17:26:38.513456774Z" level=warning msg="cleaning up after shim disconnected" id=eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8 namespace=k8s.io Mar 17 17:26:38.513464 containerd[1940]: time="2025-03-17T17:26:38.513479706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:39.008092 kubelet[2389]: E0317 17:26:39.008011 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:39.291328 containerd[1940]: time="2025-03-17T17:26:39.291165210Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:26:39.330881 containerd[1940]: time="2025-03-17T17:26:39.330706530Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\"" Mar 17 17:26:39.331912 containerd[1940]: time="2025-03-17T17:26:39.331841910Z" level=info msg="StartContainer for \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\"" Mar 17 17:26:39.382986 systemd[1]: Started cri-containerd-418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4.scope - libcontainer container 418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4. Mar 17 17:26:39.436597 containerd[1940]: time="2025-03-17T17:26:39.436516579Z" level=info msg="StartContainer for \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\" returns successfully" Mar 17 17:26:39.440436 systemd[1]: cri-containerd-418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4.scope: Deactivated successfully. 
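The pod_startup_latency_tracker entry for kube-system/kube-proxy-vgv4h a few records above carries every timestamp needed to reproduce its two reported durations. The relationship below is inferred from the numbers themselves rather than quoted from kubelet source, and the nanosecond timestamps are truncated to microseconds for Python's datetime, but the results line up with the logged values:

    # Reproduce podStartE2EDuration and podStartSLOduration for kube-proxy-vgv4h.
    # Inferred relationship (matches the logged 14.308695888s and 12.30536243):
    #   E2E ~ watchObservedRunningTime - podCreationTimestamp
    #   SLO ~ E2E - (lastFinishedPulling - firstStartedPulling)
    from datetime import datetime

    def ts(s):
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    created   = ts("2025-03-17 17:26:23.000000")   # podCreationTimestamp
    pull_from = ts("2025-03-17 17:26:26.515703")   # firstStartedPulling
    pull_to   = ts("2025-03-17 17:26:28.519036")   # lastFinishedPulling
    observed  = ts("2025-03-17 17:26:37.308695")   # watchObservedRunningTime

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")   # ~14.308695s and ~12.305362s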
Mar 17 17:26:39.475715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4-rootfs.mount: Deactivated successfully. Mar 17 17:26:39.485656 containerd[1940]: time="2025-03-17T17:26:39.485547799Z" level=info msg="shim disconnected" id=418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4 namespace=k8s.io Mar 17 17:26:39.485656 containerd[1940]: time="2025-03-17T17:26:39.485626663Z" level=warning msg="cleaning up after shim disconnected" id=418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4 namespace=k8s.io Mar 17 17:26:39.485656 containerd[1940]: time="2025-03-17T17:26:39.485677231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:40.008975 kubelet[2389]: E0317 17:26:40.008923 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:40.304575 containerd[1940]: time="2025-03-17T17:26:40.304439227Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:26:40.332423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685737061.mount: Deactivated successfully. Mar 17 17:26:40.339434 containerd[1940]: time="2025-03-17T17:26:40.339259771Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\"" Mar 17 17:26:40.340188 containerd[1940]: time="2025-03-17T17:26:40.340124803Z" level=info msg="StartContainer for \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\"" Mar 17 17:26:40.390963 systemd[1]: Started cri-containerd-cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999.scope - libcontainer container cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999. Mar 17 17:26:40.436230 systemd[1]: cri-containerd-cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999.scope: Deactivated successfully. Mar 17 17:26:40.439673 containerd[1940]: time="2025-03-17T17:26:40.439347992Z" level=info msg="StartContainer for \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\" returns successfully" Mar 17 17:26:40.470320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999-rootfs.mount: Deactivated successfully. 
Mar 17 17:26:40.479723 containerd[1940]: time="2025-03-17T17:26:40.479605328Z" level=info msg="shim disconnected" id=cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999 namespace=k8s.io Mar 17 17:26:40.479723 containerd[1940]: time="2025-03-17T17:26:40.479720420Z" level=warning msg="cleaning up after shim disconnected" id=cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999 namespace=k8s.io Mar 17 17:26:40.480080 containerd[1940]: time="2025-03-17T17:26:40.479757884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:40.499810 containerd[1940]: time="2025-03-17T17:26:40.499573892Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:26:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:26:41.009892 kubelet[2389]: E0317 17:26:41.009818 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:41.309363 containerd[1940]: time="2025-03-17T17:26:41.309204656Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:26:41.345065 containerd[1940]: time="2025-03-17T17:26:41.344311460Z" level=info msg="CreateContainer within sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\"" Mar 17 17:26:41.345401 containerd[1940]: time="2025-03-17T17:26:41.345334592Z" level=info msg="StartContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\"" Mar 17 17:26:41.397005 systemd[1]: Started cri-containerd-f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82.scope - libcontainer container f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82. 
Mar 17 17:26:41.450964 containerd[1940]: time="2025-03-17T17:26:41.450308961Z" level=info msg="StartContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" returns successfully" Mar 17 17:26:41.635152 kubelet[2389]: I0317 17:26:41.634980 2389 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:26:42.010803 kubelet[2389]: E0317 17:26:42.010624 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:42.349466 kubelet[2389]: I0317 17:26:42.346920 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fm4nm" podStartSLOduration=9.461637648 podStartE2EDuration="19.346899369s" podCreationTimestamp="2025-03-17 17:26:23 +0000 UTC" firstStartedPulling="2025-03-17 17:26:26.523663819 +0000 UTC m=+5.418635236" lastFinishedPulling="2025-03-17 17:26:36.408925528 +0000 UTC m=+15.303896957" observedRunningTime="2025-03-17 17:26:42.346453761 +0000 UTC m=+21.241425214" watchObservedRunningTime="2025-03-17 17:26:42.346899369 +0000 UTC m=+21.241870798" Mar 17 17:26:42.350864 kernel: Initializing XFRM netlink socket Mar 17 17:26:42.996373 kubelet[2389]: E0317 17:26:42.996305 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:43.010908 kubelet[2389]: E0317 17:26:43.010852 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:43.195543 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 17:26:44.011159 kubelet[2389]: E0317 17:26:44.011071 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:44.191940 (udev-worker)[3047]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:44.193246 systemd-networkd[1853]: cilium_host: Link UP Mar 17 17:26:44.194946 systemd-networkd[1853]: cilium_net: Link UP Mar 17 17:26:44.194954 systemd-networkd[1853]: cilium_net: Gained carrier Mar 17 17:26:44.195406 systemd-networkd[1853]: cilium_host: Gained carrier Mar 17 17:26:44.196009 systemd-networkd[1853]: cilium_host: Gained IPv6LL Mar 17 17:26:44.199557 (udev-worker)[3089]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:44.373602 systemd-networkd[1853]: cilium_vxlan: Link UP Mar 17 17:26:44.373623 systemd-networkd[1853]: cilium_vxlan: Gained carrier Mar 17 17:26:44.865782 kernel: NET: Registered PF_ALG protocol family Mar 17 17:26:44.888301 systemd-networkd[1853]: cilium_net: Gained IPv6LL Mar 17 17:26:45.011989 kubelet[2389]: E0317 17:26:45.011921 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:46.012499 kubelet[2389]: E0317 17:26:46.012327 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:46.102893 systemd-networkd[1853]: cilium_vxlan: Gained IPv6LL Mar 17 17:26:46.193519 systemd-networkd[1853]: lxc_health: Link UP Mar 17 17:26:46.210548 systemd-networkd[1853]: lxc_health: Gained carrier Mar 17 17:26:46.212899 (udev-worker)[3097]: Network interface NamePolicy= disabled on kernel command line. 
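Read in order, the containerd records for the cilium-fm4nm sandbox (c2630589...) show what looks like Cilium's init chain: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state are each created, started, exited and reaped via their shims before the long-running cilium-agent container comes up and the node reports ready. A small sketch, assuming only the message shapes visible above, that recovers that ordering mechanically:

    # Recover the creation order of containers in a given sandbox from the
    # "CreateContainer within sandbox ... returns container id" records above.
    import re

    CREATED = re.compile(
        r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]+)\\" for '
        r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} returns container id'
    )

    def container_sequence(text, sandbox_prefix):
        return [m.group("name") for m in CREATED.finditer(text)
                if m.group("sandbox").startswith(sandbox_prefix)]

    # Fed this section with sandbox_prefix="c2630589", the result is:
    #   ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #    'clean-cilium-state', 'cilium-agent']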
Mar 17 17:26:46.903997 systemd[1]: Created slice kubepods-besteffort-pod4bf3b041_e13c_4df4_8d61_34d6edb7670d.slice - libcontainer container kubepods-besteffort-pod4bf3b041_e13c_4df4_8d61_34d6edb7670d.slice. Mar 17 17:26:46.977972 kubelet[2389]: I0317 17:26:46.977872 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxdxh\" (UniqueName: \"kubernetes.io/projected/4bf3b041-e13c-4df4-8d61-34d6edb7670d-kube-api-access-fxdxh\") pod \"nginx-deployment-7fcdb87857-zf7s7\" (UID: \"4bf3b041-e13c-4df4-8d61-34d6edb7670d\") " pod="default/nginx-deployment-7fcdb87857-zf7s7" Mar 17 17:26:47.013464 kubelet[2389]: E0317 17:26:47.013407 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:47.209362 containerd[1940]: time="2025-03-17T17:26:47.209292026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zf7s7,Uid:4bf3b041-e13c-4df4-8d61-34d6edb7670d,Namespace:default,Attempt:0,}" Mar 17 17:26:47.320480 systemd-networkd[1853]: lxca9f402df2d48: Link UP Mar 17 17:26:47.331208 kernel: eth0: renamed from tmp08908 Mar 17 17:26:47.337757 systemd-networkd[1853]: lxca9f402df2d48: Gained carrier Mar 17 17:26:48.014285 kubelet[2389]: E0317 17:26:48.014219 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:48.151400 systemd-networkd[1853]: lxc_health: Gained IPv6LL Mar 17 17:26:49.014705 kubelet[2389]: E0317 17:26:49.014599 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:49.174971 systemd-networkd[1853]: lxca9f402df2d48: Gained IPv6LL Mar 17 17:26:50.015212 kubelet[2389]: E0317 17:26:50.015137 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:51.015705 kubelet[2389]: E0317 17:26:51.015620 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:51.404920 ntpd[1912]: Listen normally on 7 cilium_host 192.168.1.99:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 7 cilium_host 192.168.1.99:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 8 cilium_net [fe80::4d5:27ff:fee5:bc34%3]:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 9 cilium_host [fe80::1cca:2fff:fe14:40f3%4]:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 10 cilium_vxlan [fe80::8492:82ff:fe00:b699%5]:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 11 lxc_health [fe80::8e6:dbff:fe24:d5a0%7]:123 Mar 17 17:26:51.406315 ntpd[1912]: 17 Mar 17:26:51 ntpd[1912]: Listen normally on 12 lxca9f402df2d48 [fe80::6cb6:c5ff:fe9b:7f56%9]:123 Mar 17 17:26:51.405046 ntpd[1912]: Listen normally on 8 cilium_net [fe80::4d5:27ff:fee5:bc34%3]:123 Mar 17 17:26:51.405126 ntpd[1912]: Listen normally on 9 cilium_host [fe80::1cca:2fff:fe14:40f3%4]:123 Mar 17 17:26:51.405195 ntpd[1912]: Listen normally on 10 cilium_vxlan [fe80::8492:82ff:fe00:b699%5]:123 Mar 17 17:26:51.405262 ntpd[1912]: Listen normally on 11 lxc_health [fe80::8e6:dbff:fe24:d5a0%7]:123 Mar 17 17:26:51.405327 ntpd[1912]: Listen normally on 12 lxca9f402df2d48 [fe80::6cb6:c5ff:fe9b:7f56%9]:123 Mar 17 17:26:52.016800 kubelet[2389]: E0317 
17:26:52.016730 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:52.102728 kubelet[2389]: I0317 17:26:52.101511 2389 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:26:53.017489 kubelet[2389]: E0317 17:26:53.017421 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:54.018160 kubelet[2389]: E0317 17:26:54.018080 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:55.018740 kubelet[2389]: E0317 17:26:55.018670 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:55.289245 containerd[1940]: time="2025-03-17T17:26:55.287843302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:55.289245 containerd[1940]: time="2025-03-17T17:26:55.287926330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:55.289245 containerd[1940]: time="2025-03-17T17:26:55.287951698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:55.289245 containerd[1940]: time="2025-03-17T17:26:55.288092638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:55.323947 systemd[1]: run-containerd-runc-k8s.io-089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518-runc.1sQg4J.mount: Deactivated successfully. Mar 17 17:26:55.334971 systemd[1]: Started cri-containerd-089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518.scope - libcontainer container 089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518. Mar 17 17:26:55.396735 containerd[1940]: time="2025-03-17T17:26:55.396672586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-zf7s7,Uid:4bf3b041-e13c-4df4-8d61-34d6edb7670d,Namespace:default,Attempt:0,} returns sandbox id \"089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518\"" Mar 17 17:26:55.399688 containerd[1940]: time="2025-03-17T17:26:55.399589654Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:26:56.019361 kubelet[2389]: E0317 17:26:56.019287 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:57.020487 kubelet[2389]: E0317 17:26:57.020204 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:57.367923 update_engine[1917]: I20250317 17:26:57.367753 1917 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:26:57.495061 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3520) Mar 17 17:26:57.903731 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3520) Mar 17 17:26:58.021101 kubelet[2389]: E0317 17:26:58.021022 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:59.022815 kubelet[2389]: E0317 17:26:59.022720 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:26:59.385253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704411252.mount: Deactivated successfully. Mar 17 17:27:00.024062 kubelet[2389]: E0317 17:27:00.023892 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:00.752261 containerd[1940]: time="2025-03-17T17:27:00.752187953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:00.754068 containerd[1940]: time="2025-03-17T17:27:00.753967121Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69703867" Mar 17 17:27:00.755091 containerd[1940]: time="2025-03-17T17:27:00.755000789Z" level=info msg="ImageCreate event name:\"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:00.761840 containerd[1940]: time="2025-03-17T17:27:00.761731469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:00.764019 containerd[1940]: time="2025-03-17T17:27:00.763829165Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 5.364155391s" Mar 17 17:27:00.764019 containerd[1940]: time="2025-03-17T17:27:00.763881329Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:27:00.767758 containerd[1940]: time="2025-03-17T17:27:00.767709869Z" level=info msg="CreateContainer within sandbox \"089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:27:00.791465 containerd[1940]: time="2025-03-17T17:27:00.791397305Z" level=info msg="CreateContainer within sandbox \"089087456088a6d0c82b62113d33b116d8397a59f2e9535867f7be00b2cc5518\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c\"" Mar 17 17:27:00.792583 containerd[1940]: time="2025-03-17T17:27:00.792501833Z" level=info msg="StartContainer for \"ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c\"" Mar 17 17:27:00.840516 systemd[1]: run-containerd-runc-k8s.io-ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c-runc.TmyVC6.mount: Deactivated successfully. 
Mar 17 17:27:00.853971 systemd[1]: Started cri-containerd-ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c.scope - libcontainer container ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c. Mar 17 17:27:00.898732 containerd[1940]: time="2025-03-17T17:27:00.898560210Z" level=info msg="StartContainer for \"ae40cc955717a24e090fbcae61f084946509a9d6a93d118dd365ca7b9a49e27c\" returns successfully" Mar 17 17:27:01.025723 kubelet[2389]: E0317 17:27:01.024971 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:01.385750 kubelet[2389]: I0317 17:27:01.385509 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-zf7s7" podStartSLOduration=10.018880533 podStartE2EDuration="15.385487212s" podCreationTimestamp="2025-03-17 17:26:46 +0000 UTC" firstStartedPulling="2025-03-17 17:26:55.398941978 +0000 UTC m=+34.293913407" lastFinishedPulling="2025-03-17 17:27:00.765548657 +0000 UTC m=+39.660520086" observedRunningTime="2025-03-17 17:27:01.384562132 +0000 UTC m=+40.279533585" watchObservedRunningTime="2025-03-17 17:27:01.385487212 +0000 UTC m=+40.280458653" Mar 17 17:27:02.025776 kubelet[2389]: E0317 17:27:02.025706 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:02.996073 kubelet[2389]: E0317 17:27:02.995997 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:03.026758 kubelet[2389]: E0317 17:27:03.026697 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:04.027730 kubelet[2389]: E0317 17:27:04.027623 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:05.028067 kubelet[2389]: E0317 17:27:05.028000 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:05.849722 systemd[1]: Created slice kubepods-besteffort-podb2d176a3_6ad8_4571_8ca3_2be271a0563f.slice - libcontainer container kubepods-besteffort-podb2d176a3_6ad8_4571_8ca3_2be271a0563f.slice. 
Mar 17 17:27:05.912703 kubelet[2389]: I0317 17:27:05.912631 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnf2\" (UniqueName: \"kubernetes.io/projected/b2d176a3-6ad8-4571-8ca3-2be271a0563f-kube-api-access-kgnf2\") pod \"nfs-server-provisioner-0\" (UID: \"b2d176a3-6ad8-4571-8ca3-2be271a0563f\") " pod="default/nfs-server-provisioner-0" Mar 17 17:27:05.912884 kubelet[2389]: I0317 17:27:05.912733 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b2d176a3-6ad8-4571-8ca3-2be271a0563f-data\") pod \"nfs-server-provisioner-0\" (UID: \"b2d176a3-6ad8-4571-8ca3-2be271a0563f\") " pod="default/nfs-server-provisioner-0" Mar 17 17:27:06.030716 kubelet[2389]: E0317 17:27:06.028782 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:06.155253 containerd[1940]: time="2025-03-17T17:27:06.155061992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b2d176a3-6ad8-4571-8ca3-2be271a0563f,Namespace:default,Attempt:0,}" Mar 17 17:27:06.198804 systemd-networkd[1853]: lxc4e32e44277c8: Link UP Mar 17 17:27:06.205404 (udev-worker)[3775]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:27:06.208787 kernel: eth0: renamed from tmp368b5 Mar 17 17:27:06.214319 systemd-networkd[1853]: lxc4e32e44277c8: Gained carrier Mar 17 17:27:06.578010 containerd[1940]: time="2025-03-17T17:27:06.577855606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:06.578251 containerd[1940]: time="2025-03-17T17:27:06.577968130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:06.578251 containerd[1940]: time="2025-03-17T17:27:06.578006074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:06.578251 containerd[1940]: time="2025-03-17T17:27:06.578161114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:06.617174 systemd[1]: Started cri-containerd-368b5077881777643a88a13632814edab98e68c4bf865bd28c0a22d02b807715.scope - libcontainer container 368b5077881777643a88a13632814edab98e68c4bf865bd28c0a22d02b807715. 
Mar 17 17:27:06.678333 containerd[1940]: time="2025-03-17T17:27:06.678284434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b2d176a3-6ad8-4571-8ca3-2be271a0563f,Namespace:default,Attempt:0,} returns sandbox id \"368b5077881777643a88a13632814edab98e68c4bf865bd28c0a22d02b807715\"" Mar 17 17:27:06.681674 containerd[1940]: time="2025-03-17T17:27:06.681570202Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:27:07.029777 kubelet[2389]: E0317 17:27:07.029707 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:07.479293 systemd-networkd[1853]: lxc4e32e44277c8: Gained IPv6LL Mar 17 17:27:08.030933 kubelet[2389]: E0317 17:27:08.030879 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:09.031831 kubelet[2389]: E0317 17:27:09.031737 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:09.177629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398476399.mount: Deactivated successfully. Mar 17 17:27:10.031955 kubelet[2389]: E0317 17:27:10.031895 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:10.405547 ntpd[1912]: Listen normally on 13 lxc4e32e44277c8 [fe80::acb0:c3ff:fe3d:bbdc%11]:123 Mar 17 17:27:10.406130 ntpd[1912]: 17 Mar 17:27:10 ntpd[1912]: Listen normally on 13 lxc4e32e44277c8 [fe80::acb0:c3ff:fe3d:bbdc%11]:123 Mar 17 17:27:11.032904 kubelet[2389]: E0317 17:27:11.032837 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:12.033104 kubelet[2389]: E0317 17:27:12.033035 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:12.492111 containerd[1940]: time="2025-03-17T17:27:12.492040203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:12.493981 containerd[1940]: time="2025-03-17T17:27:12.493894179Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Mar 17 17:27:12.494719 containerd[1940]: time="2025-03-17T17:27:12.494616507Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:12.499908 containerd[1940]: time="2025-03-17T17:27:12.499856823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:12.502326 containerd[1940]: time="2025-03-17T17:27:12.502092123Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.820461453s" Mar 17 17:27:12.502326 containerd[1940]: 
time="2025-03-17T17:27:12.502152771Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 17:27:12.507182 containerd[1940]: time="2025-03-17T17:27:12.506950779Z" level=info msg="CreateContainer within sandbox \"368b5077881777643a88a13632814edab98e68c4bf865bd28c0a22d02b807715\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:27:12.531999 containerd[1940]: time="2025-03-17T17:27:12.531946191Z" level=info msg="CreateContainer within sandbox \"368b5077881777643a88a13632814edab98e68c4bf865bd28c0a22d02b807715\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0\"" Mar 17 17:27:12.533208 containerd[1940]: time="2025-03-17T17:27:12.532992207Z" level=info msg="StartContainer for \"2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0\"" Mar 17 17:27:12.590975 systemd[1]: Started cri-containerd-2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0.scope - libcontainer container 2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0. Mar 17 17:27:12.634379 containerd[1940]: time="2025-03-17T17:27:12.634163764Z" level=info msg="StartContainer for \"2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0\" returns successfully" Mar 17 17:27:13.033502 kubelet[2389]: E0317 17:27:13.033430 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:13.436409 kubelet[2389]: I0317 17:27:13.436262 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.613095227 podStartE2EDuration="8.436239148s" podCreationTimestamp="2025-03-17 17:27:05 +0000 UTC" firstStartedPulling="2025-03-17 17:27:06.681005134 +0000 UTC m=+45.575976551" lastFinishedPulling="2025-03-17 17:27:12.504149055 +0000 UTC m=+51.399120472" observedRunningTime="2025-03-17 17:27:13.435316444 +0000 UTC m=+52.330287897" watchObservedRunningTime="2025-03-17 17:27:13.436239148 +0000 UTC m=+52.331210577" Mar 17 17:27:13.518789 systemd[1]: run-containerd-runc-k8s.io-2bbead063f81a26fa077043ff5ff9f37a845391db63ebba3d70b0fe4597192a0-runc.n7PDzF.mount: Deactivated successfully. 
Mar 17 17:27:14.033788 kubelet[2389]: E0317 17:27:14.033725 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:15.034163 kubelet[2389]: E0317 17:27:15.034092 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:16.035221 kubelet[2389]: E0317 17:27:16.035156 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:17.035986 kubelet[2389]: E0317 17:27:17.035914 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:18.036840 kubelet[2389]: E0317 17:27:18.036772 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:19.037313 kubelet[2389]: E0317 17:27:19.037233 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:20.037921 kubelet[2389]: E0317 17:27:20.037850 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:21.038549 kubelet[2389]: E0317 17:27:21.038480 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:22.039295 kubelet[2389]: E0317 17:27:22.039225 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:22.496040 systemd[1]: Created slice kubepods-besteffort-pod26c7ead5_cf89_485d_937e_e4140023d322.slice - libcontainer container kubepods-besteffort-pod26c7ead5_cf89_485d_937e_e4140023d322.slice. Mar 17 17:27:22.518223 kubelet[2389]: I0317 17:27:22.518082 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3bec0061-ef95-46fb-ae24-16a2fc2993e0\" (UniqueName: \"kubernetes.io/nfs/26c7ead5-cf89-485d-937e-e4140023d322-pvc-3bec0061-ef95-46fb-ae24-16a2fc2993e0\") pod \"test-pod-1\" (UID: \"26c7ead5-cf89-485d-937e-e4140023d322\") " pod="default/test-pod-1" Mar 17 17:27:22.518223 kubelet[2389]: I0317 17:27:22.518150 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh865\" (UniqueName: \"kubernetes.io/projected/26c7ead5-cf89-485d-937e-e4140023d322-kube-api-access-sh865\") pod \"test-pod-1\" (UID: \"26c7ead5-cf89-485d-937e-e4140023d322\") " pod="default/test-pod-1" Mar 17 17:27:22.751849 kernel: FS-Cache: Loaded Mar 17 17:27:22.794962 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:27:22.795109 kernel: RPC: Registered udp transport module. Mar 17 17:27:22.796072 kernel: RPC: Registered tcp transport module. Mar 17 17:27:22.797145 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:27:22.798364 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
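The reconciler_common records in this log (for kube-proxy-vgv4h and cilium-fm4nm near the top, for nfs-server-provisioner-0 above, and for test-pod-1 here) each name a volume and the pod it is being attached for, so the per-pod volume list can be rebuilt from the text alone. For test-pod-1 that is the NFS-backed PVC plus its projected service-account token, which lines up with the NFS and RPC kernel modules being loaded immediately above. A sketch, with the quoting conventions assumed from this journal:

    # Group "VerifyControllerAttachedVolume started for volume ..." records by pod.
    import re
    from collections import defaultdict

    ATTACH = re.compile(r'started for volume \\"(?P<vol>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"')

    def volumes_by_pod(text):
        out = defaultdict(list)
        for m in ATTACH.finditer(text):
            out[m["pod"]].append(m["vol"])
        return dict(out)

    # In this section, "default/test-pod-1" maps to
    #   ['pvc-3bec0061-ef95-46fb-ae24-16a2fc2993e0', 'kube-api-access-sh865']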
Mar 17 17:27:22.995598 kubelet[2389]: E0317 17:27:22.995442 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:23.040035 kubelet[2389]: E0317 17:27:23.039939 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:23.233066 kernel: NFS: Registering the id_resolver key type Mar 17 17:27:23.233194 kernel: Key type id_resolver registered Mar 17 17:27:23.233227 kernel: Key type id_legacy registered Mar 17 17:27:23.354513 nfsidmap[3959]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 17:27:23.360758 nfsidmap[3960]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Mar 17 17:27:23.402472 containerd[1940]: time="2025-03-17T17:27:23.402367129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:26c7ead5-cf89-485d-937e-e4140023d322,Namespace:default,Attempt:0,}" Mar 17 17:27:23.451632 systemd-networkd[1853]: lxc21d4e31430b7: Link UP Mar 17 17:27:23.459700 kernel: eth0: renamed from tmp07d9b Mar 17 17:27:23.462027 (udev-worker)[3941]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:27:23.466225 systemd-networkd[1853]: lxc21d4e31430b7: Gained carrier Mar 17 17:27:23.786581 containerd[1940]: time="2025-03-17T17:27:23.786074559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:23.786581 containerd[1940]: time="2025-03-17T17:27:23.786213435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:23.786581 containerd[1940]: time="2025-03-17T17:27:23.786243771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:23.786581 containerd[1940]: time="2025-03-17T17:27:23.786409407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:23.821875 systemd[1]: run-containerd-runc-k8s.io-07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e-runc.B40U96.mount: Deactivated successfully. Mar 17 17:27:23.837978 systemd[1]: Started cri-containerd-07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e.scope - libcontainer container 07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e. 
Mar 17 17:27:23.893575 containerd[1940]: time="2025-03-17T17:27:23.893489608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:26c7ead5-cf89-485d-937e-e4140023d322,Namespace:default,Attempt:0,} returns sandbox id \"07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e\"" Mar 17 17:27:23.895257 containerd[1940]: time="2025-03-17T17:27:23.895176664Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:27:24.040266 kubelet[2389]: E0317 17:27:24.040106 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:24.350587 containerd[1940]: time="2025-03-17T17:27:24.350410166Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:27:24.352063 containerd[1940]: time="2025-03-17T17:27:24.351953810Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 17:27:24.358810 containerd[1940]: time="2025-03-17T17:27:24.358739570Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"69703745\" in 463.49303ms" Mar 17 17:27:24.358810 containerd[1940]: time="2025-03-17T17:27:24.358809626Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 17:27:24.363015 containerd[1940]: time="2025-03-17T17:27:24.362783342Z" level=info msg="CreateContainer within sandbox \"07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 17:27:24.389244 containerd[1940]: time="2025-03-17T17:27:24.389162594Z" level=info msg="CreateContainer within sandbox \"07d9b5779b06e025d8ac596e79c57c7dad1995051f64a3e273996886db8a6a5e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"017a68020263d1ca0ce86d3420ffb09b2b77101be8526021ff4dfee38d9ca58f\"" Mar 17 17:27:24.390581 containerd[1940]: time="2025-03-17T17:27:24.390510986Z" level=info msg="StartContainer for \"017a68020263d1ca0ce86d3420ffb09b2b77101be8526021ff4dfee38d9ca58f\"" Mar 17 17:27:24.433967 systemd[1]: Started cri-containerd-017a68020263d1ca0ce86d3420ffb09b2b77101be8526021ff4dfee38d9ca58f.scope - libcontainer container 017a68020263d1ca0ce86d3420ffb09b2b77101be8526021ff4dfee38d9ca58f. 
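The second pull of ghcr.io/flatcar/nginx:latest, for test-pod-1, completes in 463.49303ms with only 61 bytes read, versus 69,703,745 bytes in 5.364155391s for the first pull earlier in this log; this suggests the layers were already in the local content store and containerd only revalidated the manifest, which would also explain the ImageUpdate event here rather than the earlier ImageCreate. A one-liner with the two figures from the log:

    # First (cold) nginx pull vs. the cached re-pull for test-pod-1, figures from the log.
    cold_pull_s   = 5.364155391   # 69,703,745 bytes transferred
    cached_pull_s = 0.46349303    # 61 bytes read: manifest check only

    print(f"cached pull ~{cold_pull_s / cached_pull_s:.1f}x faster")   # ~11.6x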
Mar 17 17:27:24.481557 containerd[1940]: time="2025-03-17T17:27:24.481493331Z" level=info msg="StartContainer for \"017a68020263d1ca0ce86d3420ffb09b2b77101be8526021ff4dfee38d9ca58f\" returns successfully" Mar 17 17:27:25.040856 kubelet[2389]: E0317 17:27:25.040787 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:25.271194 systemd-networkd[1853]: lxc21d4e31430b7: Gained IPv6LL Mar 17 17:27:25.471702 kubelet[2389]: I0317 17:27:25.471457 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.006020774 podStartE2EDuration="19.471436624s" podCreationTimestamp="2025-03-17 17:27:06 +0000 UTC" firstStartedPulling="2025-03-17 17:27:23.894615064 +0000 UTC m=+62.789586493" lastFinishedPulling="2025-03-17 17:27:24.360030914 +0000 UTC m=+63.255002343" observedRunningTime="2025-03-17 17:27:25.471135556 +0000 UTC m=+64.366106997" watchObservedRunningTime="2025-03-17 17:27:25.471436624 +0000 UTC m=+64.366408053" Mar 17 17:27:26.041482 kubelet[2389]: E0317 17:27:26.041417 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:27.041926 kubelet[2389]: E0317 17:27:27.041846 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:27.405010 ntpd[1912]: Listen normally on 14 lxc21d4e31430b7 [fe80::40a8:65ff:fea0:10c4%13]:123 Mar 17 17:27:27.406198 ntpd[1912]: 17 Mar 17:27:27 ntpd[1912]: Listen normally on 14 lxc21d4e31430b7 [fe80::40a8:65ff:fea0:10c4%13]:123 Mar 17 17:27:28.042955 kubelet[2389]: E0317 17:27:28.042890 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:29.043978 kubelet[2389]: E0317 17:27:29.043919 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:30.044505 kubelet[2389]: E0317 17:27:30.044443 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:31.004939 containerd[1940]: time="2025-03-17T17:27:31.004847539Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:27:31.015211 containerd[1940]: time="2025-03-17T17:27:31.015154747Z" level=info msg="StopContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" with timeout 2 (s)" Mar 17 17:27:31.015948 containerd[1940]: time="2025-03-17T17:27:31.015906883Z" level=info msg="Stop container \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" with signal terminated" Mar 17 17:27:31.027477 systemd-networkd[1853]: lxc_health: Link DOWN Mar 17 17:27:31.027497 systemd-networkd[1853]: lxc_health: Lost carrier Mar 17 17:27:31.045102 kubelet[2389]: E0317 17:27:31.045012 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:31.051118 systemd[1]: cri-containerd-f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82.scope: Deactivated successfully. 
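The pod_startup_latency_tracker entry above carries two durations that are related by simple arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the time spent pulling the image (lastFinishedPulling minus firstStartedPulling). A stdlib-only recomputation from the timestamps in the log line (fractional seconds are truncated to microseconds only so strptime's %f accepts them):

```python
# Recompute the kubelet pod-startup durations from the timestamps logged above.
from datetime import datetime

def parse(ts: str) -> datetime:
    # e.g. "2025-03-17 17:27:25.471436624 +0000 UTC"
    date, clock, *_ = ts.split()
    sec, _, frac = clock.partition(".")
    frac = (frac or "0")[:6]  # strptime's %f accepts at most 6 digits
    return datetime.strptime(f"{date} {sec}.{frac}", "%Y-%m-%d %H:%M:%S.%f")

created  = parse("2025-03-17 17:27:06 +0000 UTC")
pull_beg = parse("2025-03-17 17:27:23.894615064 +0000 UTC")
pull_end = parse("2025-03-17 17:27:24.360030914 +0000 UTC")
running  = parse("2025-03-17 17:27:25.471436624 +0000 UTC")

e2e = (running - created).total_seconds()          # ~19.471s, matches podStartE2EDuration
slo = e2e - (pull_end - pull_beg).total_seconds()  # ~19.006s, matches podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```

Both results match the logged 19.471436624s and 19.006020774s to microsecond precision.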
Mar 17 17:27:31.051627 systemd[1]: cri-containerd-f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82.scope: Consumed 14.266s CPU time. Mar 17 17:27:31.090433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82-rootfs.mount: Deactivated successfully. Mar 17 17:27:31.344906 containerd[1940]: time="2025-03-17T17:27:31.344695125Z" level=info msg="shim disconnected" id=f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82 namespace=k8s.io Mar 17 17:27:31.344906 containerd[1940]: time="2025-03-17T17:27:31.344772465Z" level=warning msg="cleaning up after shim disconnected" id=f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82 namespace=k8s.io Mar 17 17:27:31.344906 containerd[1940]: time="2025-03-17T17:27:31.344793393Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.369829 containerd[1940]: time="2025-03-17T17:27:31.369695205Z" level=info msg="StopContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" returns successfully" Mar 17 17:27:31.370704 containerd[1940]: time="2025-03-17T17:27:31.370510065Z" level=info msg="StopPodSandbox for \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\"" Mar 17 17:27:31.370704 containerd[1940]: time="2025-03-17T17:27:31.370571589Z" level=info msg="Container to stop \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.370704 containerd[1940]: time="2025-03-17T17:27:31.370595349Z" level=info msg="Container to stop \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.370704 containerd[1940]: time="2025-03-17T17:27:31.370616445Z" level=info msg="Container to stop \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.371180 containerd[1940]: time="2025-03-17T17:27:31.370661301Z" level=info msg="Container to stop \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.371430 containerd[1940]: time="2025-03-17T17:27:31.371127117Z" level=info msg="Container to stop \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.375957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265-shm.mount: Deactivated successfully. Mar 17 17:27:31.384486 systemd[1]: cri-containerd-c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265.scope: Deactivated successfully. Mar 17 17:27:31.419025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265-rootfs.mount: Deactivated successfully. 
Mar 17 17:27:31.424934 containerd[1940]: time="2025-03-17T17:27:31.424780557Z" level=info msg="shim disconnected" id=c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265 namespace=k8s.io Mar 17 17:27:31.425452 containerd[1940]: time="2025-03-17T17:27:31.425202489Z" level=warning msg="cleaning up after shim disconnected" id=c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265 namespace=k8s.io Mar 17 17:27:31.425452 containerd[1940]: time="2025-03-17T17:27:31.425230317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.447591 containerd[1940]: time="2025-03-17T17:27:31.447271173Z" level=info msg="TearDown network for sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" successfully" Mar 17 17:27:31.447591 containerd[1940]: time="2025-03-17T17:27:31.447331629Z" level=info msg="StopPodSandbox for \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" returns successfully" Mar 17 17:27:31.476678 kubelet[2389]: I0317 17:27:31.474723 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-xtables-lock\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.476678 kubelet[2389]: I0317 17:27:31.474787 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-config-path\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.476678 kubelet[2389]: I0317 17:27:31.474816 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.476678 kubelet[2389]: I0317 17:27:31.474857 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.476678 kubelet[2389]: I0317 17:27:31.474826 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-net\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.474909 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-bpf-maps\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.474949 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-hostproc\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.474989 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fcac3214-4e7e-4b38-ac80-365486e6c93e-clustermesh-secrets\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.475023 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-etc-cni-netd\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.475061 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-hubble-tls\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477089 kubelet[2389]: I0317 17:27:31.475101 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvd2s\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-kube-api-access-dvd2s\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475138 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-run\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475171 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-cgroup\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475202 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cni-path\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 
17:27:31.477395 kubelet[2389]: I0317 17:27:31.475236 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-kernel\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475271 2389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-lib-modules\") pod \"fcac3214-4e7e-4b38-ac80-365486e6c93e\" (UID: \"fcac3214-4e7e-4b38-ac80-365486e6c93e\") " Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475322 2389 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-xtables-lock\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.477395 kubelet[2389]: I0317 17:27:31.475349 2389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-net\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.477888 kubelet[2389]: I0317 17:27:31.475387 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.477888 kubelet[2389]: I0317 17:27:31.475424 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.477888 kubelet[2389]: I0317 17:27:31.475458 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-hostproc" (OuterVolumeSpecName: "hostproc") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.478884 kubelet[2389]: I0317 17:27:31.478266 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.481223 kubelet[2389]: I0317 17:27:31.481172 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.483939 kubelet[2389]: I0317 17:27:31.481444 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cni-path" (OuterVolumeSpecName: "cni-path") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.484820 kubelet[2389]: I0317 17:27:31.481493 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.484820 kubelet[2389]: I0317 17:27:31.484177 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.484820 kubelet[2389]: I0317 17:27:31.484305 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcac3214-4e7e-4b38-ac80-365486e6c93e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 17:27:31.485201 kubelet[2389]: I0317 17:27:31.485168 2389 scope.go:117] "RemoveContainer" containerID="f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82" Mar 17 17:27:31.487222 systemd[1]: var-lib-kubelet-pods-fcac3214\x2d4e7e\x2d4b38\x2dac80\x2d365486e6c93e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:27:31.493971 containerd[1940]: time="2025-03-17T17:27:31.493531594Z" level=info msg="RemoveContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\"" Mar 17 17:27:31.496866 kubelet[2389]: I0317 17:27:31.496763 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-kube-api-access-dvd2s" (OuterVolumeSpecName: "kube-api-access-dvd2s") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "kube-api-access-dvd2s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:27:31.499698 containerd[1940]: time="2025-03-17T17:27:31.499378486Z" level=info msg="RemoveContainer for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" returns successfully" Mar 17 17:27:31.499964 kubelet[2389]: I0317 17:27:31.499923 2389 scope.go:117] "RemoveContainer" containerID="cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999" Mar 17 17:27:31.500545 kubelet[2389]: I0317 17:27:31.500157 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:27:31.500927 kubelet[2389]: I0317 17:27:31.500876 2389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fcac3214-4e7e-4b38-ac80-365486e6c93e" (UID: "fcac3214-4e7e-4b38-ac80-365486e6c93e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:27:31.502326 containerd[1940]: time="2025-03-17T17:27:31.502213534Z" level=info msg="RemoveContainer for \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\"" Mar 17 17:27:31.506166 containerd[1940]: time="2025-03-17T17:27:31.506100622Z" level=info msg="RemoveContainer for \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\" returns successfully" Mar 17 17:27:31.506512 kubelet[2389]: I0317 17:27:31.506446 2389 scope.go:117] "RemoveContainer" containerID="418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4" Mar 17 17:27:31.508516 containerd[1940]: time="2025-03-17T17:27:31.508236238Z" level=info msg="RemoveContainer for \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\"" Mar 17 17:27:31.512124 containerd[1940]: time="2025-03-17T17:27:31.512060590Z" level=info msg="RemoveContainer for \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\" returns successfully" Mar 17 17:27:31.512439 kubelet[2389]: I0317 17:27:31.512375 2389 scope.go:117] "RemoveContainer" containerID="eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8" Mar 17 17:27:31.515079 containerd[1940]: time="2025-03-17T17:27:31.514256758Z" level=info msg="RemoveContainer for \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\"" Mar 17 17:27:31.522288 containerd[1940]: time="2025-03-17T17:27:31.522233086Z" level=info msg="RemoveContainer for \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\" returns successfully" Mar 17 17:27:31.524181 kubelet[2389]: I0317 17:27:31.524124 2389 scope.go:117] "RemoveContainer" containerID="3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44" Mar 17 17:27:31.526509 containerd[1940]: time="2025-03-17T17:27:31.526372522Z" level=info msg="RemoveContainer for \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\"" Mar 17 17:27:31.529797 containerd[1940]: time="2025-03-17T17:27:31.529736482Z" level=info msg="RemoveContainer for \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\" returns successfully" Mar 17 17:27:31.530100 kubelet[2389]: I0317 17:27:31.530061 2389 scope.go:117] "RemoveContainer" containerID="f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82" Mar 17 17:27:31.530536 containerd[1940]: time="2025-03-17T17:27:31.530400934Z" level=error msg="ContainerStatus for \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\": not found" Mar 17 17:27:31.530671 kubelet[2389]: E0317 17:27:31.530606 2389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\": not found" containerID="f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82" Mar 17 
17:27:31.530764 kubelet[2389]: I0317 17:27:31.530683 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82"} err="failed to get container status \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7a60f941e9d4ca893172d01d655a15e8b22653ad3c7c2341775e3e0cb5c5f82\": not found" Mar 17 17:27:31.530764 kubelet[2389]: I0317 17:27:31.530757 2389 scope.go:117] "RemoveContainer" containerID="cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999" Mar 17 17:27:31.531492 containerd[1940]: time="2025-03-17T17:27:31.531355978Z" level=error msg="ContainerStatus for \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\": not found" Mar 17 17:27:31.531755 kubelet[2389]: E0317 17:27:31.531600 2389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\": not found" containerID="cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999" Mar 17 17:27:31.531755 kubelet[2389]: I0317 17:27:31.531665 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999"} err="failed to get container status \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc04c49625054cc2a83c380e56b213f4412ea7e1f46db3a2eb1dd42758cb8999\": not found" Mar 17 17:27:31.531755 kubelet[2389]: I0317 17:27:31.531700 2389 scope.go:117] "RemoveContainer" containerID="418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4" Mar 17 17:27:31.532509 containerd[1940]: time="2025-03-17T17:27:31.532340194Z" level=error msg="ContainerStatus for \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\": not found" Mar 17 17:27:31.532658 kubelet[2389]: E0317 17:27:31.532585 2389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\": not found" containerID="418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4" Mar 17 17:27:31.532795 kubelet[2389]: I0317 17:27:31.532674 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4"} err="failed to get container status \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"418a25315ba0d288665bab36e1a6626f26c8bf4f6397ed92b1b86793c33ec0d4\": not found" Mar 17 17:27:31.532795 kubelet[2389]: I0317 17:27:31.532706 2389 scope.go:117] "RemoveContainer" containerID="eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8" Mar 17 17:27:31.533124 containerd[1940]: time="2025-03-17T17:27:31.533062726Z" 
level=error msg="ContainerStatus for \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\": not found" Mar 17 17:27:31.533339 kubelet[2389]: E0317 17:27:31.533265 2389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\": not found" containerID="eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8" Mar 17 17:27:31.533339 kubelet[2389]: I0317 17:27:31.533301 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8"} err="failed to get container status \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb8d9297506e5f30fffe1846b3c3d10cd7b2b205a8dc86cf4a57f0febb8ebad8\": not found" Mar 17 17:27:31.533339 kubelet[2389]: I0317 17:27:31.533333 2389 scope.go:117] "RemoveContainer" containerID="3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44" Mar 17 17:27:31.534199 containerd[1940]: time="2025-03-17T17:27:31.534039754Z" level=error msg="ContainerStatus for \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\": not found" Mar 17 17:27:31.534382 kubelet[2389]: E0317 17:27:31.534297 2389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\": not found" containerID="3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44" Mar 17 17:27:31.534382 kubelet[2389]: I0317 17:27:31.534339 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44"} err="failed to get container status \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\": rpc error: code = NotFound desc = an error occurred when try to find container \"3067b6c4c4bc5c14c8ad7cbf422107748e0c1598fffdbe63ab06cd96cebb9d44\": not found" Mar 17 17:27:31.575908 kubelet[2389]: I0317 17:27:31.575854 2389 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-etc-cni-netd\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.575908 kubelet[2389]: I0317 17:27:31.575905 2389 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-hubble-tls\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.575932 2389 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cni-path\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.575957 2389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-host-proc-sys-kernel\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.575978 2389 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-lib-modules\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.575999 2389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dvd2s\" (UniqueName: \"kubernetes.io/projected/fcac3214-4e7e-4b38-ac80-365486e6c93e-kube-api-access-dvd2s\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.576021 2389 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-run\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.576041 2389 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-cgroup\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.576061 2389 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-bpf-maps\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576303 kubelet[2389]: I0317 17:27:31.576082 2389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fcac3214-4e7e-4b38-ac80-365486e6c93e-cilium-config-path\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576831 kubelet[2389]: I0317 17:27:31.576104 2389 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fcac3214-4e7e-4b38-ac80-365486e6c93e-hostproc\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.576831 kubelet[2389]: I0317 17:27:31.576123 2389 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fcac3214-4e7e-4b38-ac80-365486e6c93e-clustermesh-secrets\") on node \"172.31.16.223\" DevicePath \"\"" Mar 17 17:27:31.795519 systemd[1]: Removed slice kubepods-burstable-podfcac3214_4e7e_4b38_ac80_365486e6c93e.slice - libcontainer container kubepods-burstable-podfcac3214_4e7e_4b38_ac80_365486e6c93e.slice. Mar 17 17:27:31.795762 systemd[1]: kubepods-burstable-podfcac3214_4e7e_4b38_ac80_365486e6c93e.slice: Consumed 14.413s CPU time. Mar 17 17:27:31.986316 systemd[1]: var-lib-kubelet-pods-fcac3214\x2d4e7e\x2d4b38\x2dac80\x2d365486e6c93e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddvd2s.mount: Deactivated successfully. Mar 17 17:27:31.986521 systemd[1]: var-lib-kubelet-pods-fcac3214\x2d4e7e\x2d4b38\x2dac80\x2d365486e6c93e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 17:27:32.045941 kubelet[2389]: E0317 17:27:32.045801 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:33.046479 kubelet[2389]: E0317 17:27:33.046001 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:33.185391 kubelet[2389]: E0317 17:27:33.185264 2389 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:27:33.194136 kubelet[2389]: I0317 17:27:33.193036 2389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcac3214-4e7e-4b38-ac80-365486e6c93e" path="/var/lib/kubelet/pods/fcac3214-4e7e-4b38-ac80-365486e6c93e/volumes" Mar 17 17:27:33.404962 ntpd[1912]: Deleting interface #11 lxc_health, fe80::8e6:dbff:fe24:d5a0%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs Mar 17 17:27:33.405564 ntpd[1912]: 17 Mar 17:27:33 ntpd[1912]: Deleting interface #11 lxc_health, fe80::8e6:dbff:fe24:d5a0%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs Mar 17 17:27:34.047024 kubelet[2389]: E0317 17:27:34.046949 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:34.296166 kubelet[2389]: I0317 17:27:34.296075 2389 setters.go:602] "Node became not ready" node="172.31.16.223" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:27:34Z","lastTransitionTime":"2025-03-17T17:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:27:34.739947 kubelet[2389]: I0317 17:27:34.739890 2389 memory_manager.go:355] "RemoveStaleState removing state" podUID="fcac3214-4e7e-4b38-ac80-365486e6c93e" containerName="cilium-agent" Mar 17 17:27:34.751596 kubelet[2389]: W0317 17:27:34.749742 2389 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.16.223" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.223' and this object Mar 17 17:27:34.751596 kubelet[2389]: E0317 17:27:34.749800 2389 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.31.16.223\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.223' and this object" logger="UnhandledError" Mar 17 17:27:34.751596 kubelet[2389]: I0317 17:27:34.749880 2389 status_manager.go:890] "Failed to get status for pod" podUID="18d70e0a-9128-4e44-8cd4-041c2ccd518e" pod="kube-system/cilium-operator-6c4d7847fc-hr667" err="pods \"cilium-operator-6c4d7847fc-hr667\" is forbidden: User \"system:node:172.31.16.223\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.223' and this object" Mar 17 17:27:34.750823 systemd[1]: Created slice kubepods-besteffort-pod18d70e0a_9128_4e44_8cd4_041c2ccd518e.slice - libcontainer container 
kubepods-besteffort-pod18d70e0a_9128_4e44_8cd4_041c2ccd518e.slice. Mar 17 17:27:34.765318 kubelet[2389]: W0317 17:27:34.765279 2389 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.16.223" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.223' and this object Mar 17 17:27:34.765568 kubelet[2389]: E0317 17:27:34.765530 2389 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172.31.16.223\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.223' and this object" logger="UnhandledError" Mar 17 17:27:34.765740 kubelet[2389]: W0317 17:27:34.765288 2389 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.16.223" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.223' and this object Mar 17 17:27:34.765873 kubelet[2389]: E0317 17:27:34.765846 2389 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172.31.16.223\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.223' and this object" logger="UnhandledError" Mar 17 17:27:34.765972 kubelet[2389]: W0317 17:27:34.765401 2389 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.16.223" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.223' and this object Mar 17 17:27:34.766115 kubelet[2389]: E0317 17:27:34.766079 2389 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172.31.16.223\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.16.223' and this object" logger="UnhandledError" Mar 17 17:27:34.767361 systemd[1]: Created slice kubepods-burstable-pod70b13cdf_c362_4123_ad04_24f12d9a1bed.slice - libcontainer container kubepods-burstable-pod70b13cdf_c362_4123_ad04_24f12d9a1bed.slice. 
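The "cni plugin not initialized" error and the "Node became not ready" condition reported earlier share one cause: once the old Cilium pod's 05-cilium.conf was removed, /etc/cni/net.d held no network configuration, and the CRI plugin has nothing to load until the replacement Cilium agent writes its config back. A minimal check along those lines; the directory and extensions are the libcni defaults containerd reports against, while the helper itself is only illustrative:

```python
# Report whether the CRI plugin would find a usable CNI network config.
# libcni loads *.conf, *.conflist and *.json files from the conf dir by default.
from pathlib import Path

def cni_config_files(conf_dir: str = "/etc/cni/net.d"):
    d = Path(conf_dir)
    if not d.is_dir():
        return []
    return sorted(p for p in d.iterdir() if p.suffix in {".conf", ".conflist", ".json"})

files = cni_config_files()
if files:
    print("CNI configs found:", ", ".join(p.name for p in files))
else:
    print("no network config found in /etc/cni/net.d: cni plugin not initialized")
```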
Mar 17 17:27:34.794596 kubelet[2389]: I0317 17:27:34.794519 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-run\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794596 kubelet[2389]: I0317 17:27:34.794589 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-bpf-maps\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794844 kubelet[2389]: I0317 17:27:34.794629 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-cgroup\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794844 kubelet[2389]: I0317 17:27:34.794708 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-cni-path\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794844 kubelet[2389]: I0317 17:27:34.794745 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-lib-modules\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794844 kubelet[2389]: I0317 17:27:34.794780 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-xtables-lock\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.794844 kubelet[2389]: I0317 17:27:34.794820 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-config-path\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795092 kubelet[2389]: I0317 17:27:34.794866 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9m74\" (UniqueName: \"kubernetes.io/projected/70b13cdf-c362-4123-ad04-24f12d9a1bed-kube-api-access-n9m74\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795092 kubelet[2389]: I0317 17:27:34.794903 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18d70e0a-9128-4e44-8cd4-041c2ccd518e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hr667\" (UID: \"18d70e0a-9128-4e44-8cd4-041c2ccd518e\") " pod="kube-system/cilium-operator-6c4d7847fc-hr667" Mar 17 17:27:34.795092 kubelet[2389]: I0317 17:27:34.794946 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-hostproc\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795092 kubelet[2389]: I0317 17:27:34.794984 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-host-proc-sys-net\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795092 kubelet[2389]: I0317 17:27:34.795024 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70b13cdf-c362-4123-ad04-24f12d9a1bed-hubble-tls\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795330 kubelet[2389]: I0317 17:27:34.795061 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph86m\" (UniqueName: \"kubernetes.io/projected/18d70e0a-9128-4e44-8cd4-041c2ccd518e-kube-api-access-ph86m\") pod \"cilium-operator-6c4d7847fc-hr667\" (UID: \"18d70e0a-9128-4e44-8cd4-041c2ccd518e\") " pod="kube-system/cilium-operator-6c4d7847fc-hr667" Mar 17 17:27:34.795330 kubelet[2389]: I0317 17:27:34.795113 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-etc-cni-netd\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795330 kubelet[2389]: I0317 17:27:34.795151 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-clustermesh-secrets\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795330 kubelet[2389]: I0317 17:27:34.795188 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-ipsec-secrets\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:34.795330 kubelet[2389]: I0317 17:27:34.795225 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70b13cdf-c362-4123-ad04-24f12d9a1bed-host-proc-sys-kernel\") pod \"cilium-vl4xg\" (UID: \"70b13cdf-c362-4123-ad04-24f12d9a1bed\") " pod="kube-system/cilium-vl4xg" Mar 17 17:27:35.048104 kubelet[2389]: E0317 17:27:35.047942 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:35.656716 containerd[1940]: time="2025-03-17T17:27:35.656630438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hr667,Uid:18d70e0a-9128-4e44-8cd4-041c2ccd518e,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:35.691770 containerd[1940]: time="2025-03-17T17:27:35.691069226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:35.691770 containerd[1940]: time="2025-03-17T17:27:35.691184942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:35.691770 containerd[1940]: time="2025-03-17T17:27:35.691222838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:35.691770 containerd[1940]: time="2025-03-17T17:27:35.691386482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:35.731987 systemd[1]: Started cri-containerd-f31acb6c4125bda7010bf583013b6fb657205335bc5cbebb7e5e03c8d6089e70.scope - libcontainer container f31acb6c4125bda7010bf583013b6fb657205335bc5cbebb7e5e03c8d6089e70. Mar 17 17:27:35.793632 containerd[1940]: time="2025-03-17T17:27:35.793516467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hr667,Uid:18d70e0a-9128-4e44-8cd4-041c2ccd518e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f31acb6c4125bda7010bf583013b6fb657205335bc5cbebb7e5e03c8d6089e70\"" Mar 17 17:27:35.797314 containerd[1940]: time="2025-03-17T17:27:35.796979823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:27:35.897783 kubelet[2389]: E0317 17:27:35.897723 2389 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.897783 kubelet[2389]: E0317 17:27:35.897780 2389 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-vl4xg: failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.898014 kubelet[2389]: E0317 17:27:35.897874 2389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70b13cdf-c362-4123-ad04-24f12d9a1bed-hubble-tls podName:70b13cdf-c362-4123-ad04-24f12d9a1bed nodeName:}" failed. No retries permitted until 2025-03-17 17:27:36.397841395 +0000 UTC m=+75.292812824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/70b13cdf-c362-4123-ad04-24f12d9a1bed-hubble-tls") pod "cilium-vl4xg" (UID: "70b13cdf-c362-4123-ad04-24f12d9a1bed") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.898150 kubelet[2389]: E0317 17:27:35.897726 2389 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.898231 kubelet[2389]: E0317 17:27:35.898190 2389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-ipsec-secrets podName:70b13cdf-c362-4123-ad04-24f12d9a1bed nodeName:}" failed. No retries permitted until 2025-03-17 17:27:36.398165671 +0000 UTC m=+75.293137100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-cilium-ipsec-secrets") pod "cilium-vl4xg" (UID: "70b13cdf-c362-4123-ad04-24f12d9a1bed") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.898231 kubelet[2389]: E0317 17:27:35.897748 2389 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:35.898370 kubelet[2389]: E0317 17:27:35.898250 2389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-clustermesh-secrets podName:70b13cdf-c362-4123-ad04-24f12d9a1bed nodeName:}" failed. No retries permitted until 2025-03-17 17:27:36.398236231 +0000 UTC m=+75.293207660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/70b13cdf-c362-4123-ad04-24f12d9a1bed-clustermesh-secrets") pod "cilium-vl4xg" (UID: "70b13cdf-c362-4123-ad04-24f12d9a1bed") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:27:36.048624 kubelet[2389]: E0317 17:27:36.048543 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:27:36.584661 containerd[1940]: time="2025-03-17T17:27:36.582874719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vl4xg,Uid:70b13cdf-c362-4123-ad04-24f12d9a1bed,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:36.619203 containerd[1940]: time="2025-03-17T17:27:36.619043511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:36.619405 containerd[1940]: time="2025-03-17T17:27:36.619175031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:36.619405 containerd[1940]: time="2025-03-17T17:27:36.619213983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:36.619622 containerd[1940]: time="2025-03-17T17:27:36.619463535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:36.652966 systemd[1]: Started cri-containerd-03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2.scope - libcontainer container 03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2. 
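The MountVolume.SetUp failures above are not terminal: the kubelet re-queues each failed volume operation with an exponential backoff, which is why the messages say "No retries permitted until ... (durationBeforeRetry 500ms)". A rough sketch of that pattern; the 500ms initial delay matches the log, while the doubling factor and the cap are assumptions rather than the kubelet's exact parameters:

```python
# Illustrative exponential backoff in the style of kubelet's volume-operation retries.
# The 500ms initial delay comes from the log; factor and cap below are assumptions.
from datetime import datetime, timedelta

def next_retry_at(failure_time: datetime, attempt: int,
                  initial: float = 0.5, factor: float = 2.0, cap: float = 120.0) -> datetime:
    """Earliest time the operation may be retried after its Nth consecutive failure (attempt >= 1)."""
    delay = min(initial * factor ** (attempt - 1), cap)
    return failure_time + timedelta(seconds=delay)

# Assuming each retry fails again as soon as it is permitted:
t = datetime(2025, 3, 17, 17, 27, 35, 897841)  # approximate time of the first failure above
for attempt in range(1, 5):
    t = next_retry_at(t, attempt)
    print(f"attempt {attempt}: no retries permitted until {t.isoformat()}")
```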
Mar 17 17:27:36.693937 containerd[1940]: time="2025-03-17T17:27:36.693874263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vl4xg,Uid:70b13cdf-c362-4123-ad04-24f12d9a1bed,Namespace:kube-system,Attempt:0,} returns sandbox id \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\""
Mar 17 17:27:36.699032 containerd[1940]: time="2025-03-17T17:27:36.698976147Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:27:36.713248 containerd[1940]: time="2025-03-17T17:27:36.713108871Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd\""
Mar 17 17:27:36.714746 containerd[1940]: time="2025-03-17T17:27:36.713854059Z" level=info msg="StartContainer for \"578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd\""
Mar 17 17:27:36.755968 systemd[1]: Started cri-containerd-578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd.scope - libcontainer container 578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd.
Mar 17 17:27:36.798186 containerd[1940]: time="2025-03-17T17:27:36.798096160Z" level=info msg="StartContainer for \"578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd\" returns successfully"
Mar 17 17:27:36.813234 systemd[1]: cri-containerd-578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd.scope: Deactivated successfully.
Mar 17 17:27:36.858605 containerd[1940]: time="2025-03-17T17:27:36.858416452Z" level=info msg="shim disconnected" id=578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd namespace=k8s.io
Mar 17 17:27:36.858605 containerd[1940]: time="2025-03-17T17:27:36.858494872Z" level=warning msg="cleaning up after shim disconnected" id=578179b46075054164c7935c67e2c287d33af6ce82e4442a5949bfefb5b29cbd namespace=k8s.io
Mar 17 17:27:36.858605 containerd[1940]: time="2025-03-17T17:27:36.858516280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:37.049623 kubelet[2389]: E0317 17:27:37.049542 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:37.510400 containerd[1940]: time="2025-03-17T17:27:37.510305031Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:27:37.528698 containerd[1940]: time="2025-03-17T17:27:37.528605320Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660\""
Mar 17 17:27:37.529831 containerd[1940]: time="2025-03-17T17:27:37.529596124Z" level=info msg="StartContainer for \"e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660\""
Mar 17 17:27:37.583984 systemd[1]: Started cri-containerd-e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660.scope - libcontainer container e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660.
Mar 17 17:27:37.635283 containerd[1940]: time="2025-03-17T17:27:37.635182180Z" level=info msg="StartContainer for \"e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660\" returns successfully"
Mar 17 17:27:37.645930 systemd[1]: cri-containerd-e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660.scope: Deactivated successfully.
Mar 17 17:27:37.686030 containerd[1940]: time="2025-03-17T17:27:37.685933672Z" level=info msg="shim disconnected" id=e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660 namespace=k8s.io
Mar 17 17:27:37.686030 containerd[1940]: time="2025-03-17T17:27:37.686010304Z" level=warning msg="cleaning up after shim disconnected" id=e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660 namespace=k8s.io
Mar 17 17:27:37.686030 containerd[1940]: time="2025-03-17T17:27:37.686032012Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:38.049814 kubelet[2389]: E0317 17:27:38.049743 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:38.186367 kubelet[2389]: E0317 17:27:38.186299 2389 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:27:38.414373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e738a8e1e74dc1f706e1a2ea9414906a0f50b2f727bc9300621b85d906143660-rootfs.mount: Deactivated successfully.
Mar 17 17:27:38.523162 containerd[1940]: time="2025-03-17T17:27:38.522433804Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:27:38.553307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548758511.mount: Deactivated successfully.
Mar 17 17:27:38.603142 containerd[1940]: time="2025-03-17T17:27:38.603050825Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc\""
Mar 17 17:27:38.604353 containerd[1940]: time="2025-03-17T17:27:38.604248425Z" level=info msg="StartContainer for \"2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc\""
Mar 17 17:27:38.664067 systemd[1]: Started cri-containerd-2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc.scope - libcontainer container 2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc.
Mar 17 17:27:38.738229 containerd[1940]: time="2025-03-17T17:27:38.738151578Z" level=info msg="StartContainer for \"2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc\" returns successfully"
Mar 17 17:27:38.744972 systemd[1]: cri-containerd-2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc.scope: Deactivated successfully.
Mar 17 17:27:38.817195 containerd[1940]: time="2025-03-17T17:27:38.817110714Z" level=info msg="shim disconnected" id=2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc namespace=k8s.io
Mar 17 17:27:38.817195 containerd[1940]: time="2025-03-17T17:27:38.817186230Z" level=warning msg="cleaning up after shim disconnected" id=2cdb31622f874e310af0aae3ad0f636a870e55e8d59c5dc87bec6a9c521420dc namespace=k8s.io
Mar 17 17:27:38.817195 containerd[1940]: time="2025-03-17T17:27:38.817207782Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:39.050910 kubelet[2389]: E0317 17:27:39.050619 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:39.526347 containerd[1940]: time="2025-03-17T17:27:39.526291853Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:27:39.546304 containerd[1940]: time="2025-03-17T17:27:39.546171414Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886\""
Mar 17 17:27:39.547089 containerd[1940]: time="2025-03-17T17:27:39.546902034Z" level=info msg="StartContainer for \"311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886\""
Mar 17 17:27:39.602983 systemd[1]: Started cri-containerd-311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886.scope - libcontainer container 311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886.
Mar 17 17:27:39.647505 systemd[1]: cri-containerd-311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886.scope: Deactivated successfully.
Mar 17 17:27:39.649780 containerd[1940]: time="2025-03-17T17:27:39.649522230Z" level=info msg="StartContainer for \"311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886\" returns successfully"
Mar 17 17:27:39.742014 containerd[1940]: time="2025-03-17T17:27:39.741880963Z" level=info msg="shim disconnected" id=311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886 namespace=k8s.io
Mar 17 17:27:39.742014 containerd[1940]: time="2025-03-17T17:27:39.741959371Z" level=warning msg="cleaning up after shim disconnected" id=311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886 namespace=k8s.io
Mar 17 17:27:39.742014 containerd[1940]: time="2025-03-17T17:27:39.741980203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:40.051704 kubelet[2389]: E0317 17:27:40.051616 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:40.091404 containerd[1940]: time="2025-03-17T17:27:40.091309768Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:40.093265 containerd[1940]: time="2025-03-17T17:27:40.093175504Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 17 17:27:40.094460 containerd[1940]: time="2025-03-17T17:27:40.094388104Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:40.098193 containerd[1940]: time="2025-03-17T17:27:40.097307920Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.300268961s"
Mar 17 17:27:40.098193 containerd[1940]: time="2025-03-17T17:27:40.097365052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 17 17:27:40.101320 containerd[1940]: time="2025-03-17T17:27:40.101142364Z" level=info msg="CreateContainer within sandbox \"f31acb6c4125bda7010bf583013b6fb657205335bc5cbebb7e5e03c8d6089e70\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 17:27:40.124188 containerd[1940]: time="2025-03-17T17:27:40.124091824Z" level=info msg="CreateContainer within sandbox \"f31acb6c4125bda7010bf583013b6fb657205335bc5cbebb7e5e03c8d6089e70\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"562736d7e15184437db1a214fa8a6852eaa22ea3508c71fa0216554dc43d55ed\""
Mar 17 17:27:40.124998 containerd[1940]: time="2025-03-17T17:27:40.124950340Z" level=info msg="StartContainer for \"562736d7e15184437db1a214fa8a6852eaa22ea3508c71fa0216554dc43d55ed\""
Mar 17 17:27:40.171936 systemd[1]: Started cri-containerd-562736d7e15184437db1a214fa8a6852eaa22ea3508c71fa0216554dc43d55ed.scope - libcontainer container 562736d7e15184437db1a214fa8a6852eaa22ea3508c71fa0216554dc43d55ed.
Mar 17 17:27:40.221242 containerd[1940]: time="2025-03-17T17:27:40.221157545Z" level=info msg="StartContainer for \"562736d7e15184437db1a214fa8a6852eaa22ea3508c71fa0216554dc43d55ed\" returns successfully"
Mar 17 17:27:40.415636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-311481e141118900ac9b961c2ebdaa23cfbe1eb318304044f8ce5bef2c34b886-rootfs.mount: Deactivated successfully.
Mar 17 17:27:40.536665 containerd[1940]: time="2025-03-17T17:27:40.536583546Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:27:40.541893 kubelet[2389]: I0317 17:27:40.541807 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hr667" podStartSLOduration=2.2392810340000002 podStartE2EDuration="6.541783963s" podCreationTimestamp="2025-03-17 17:27:34 +0000 UTC" firstStartedPulling="2025-03-17 17:27:35.796249791 +0000 UTC m=+74.691221208" lastFinishedPulling="2025-03-17 17:27:40.09875272 +0000 UTC m=+78.993724137" observedRunningTime="2025-03-17 17:27:40.541169071 +0000 UTC m=+79.436140500" watchObservedRunningTime="2025-03-17 17:27:40.541783963 +0000 UTC m=+79.436755416"
Mar 17 17:27:40.561745 containerd[1940]: time="2025-03-17T17:27:40.561375799Z" level=info msg="CreateContainer within sandbox \"03f6f336d8652581c448595d874a5a03e0c010c8d294e117e8514ab11ae313d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919\""
Mar 17 17:27:40.562535 containerd[1940]: time="2025-03-17T17:27:40.562315951Z" level=info msg="StartContainer for \"03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919\""
Mar 17 17:27:40.620953 systemd[1]: Started cri-containerd-03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919.scope - libcontainer container 03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919.
Mar 17 17:27:40.681125 containerd[1940]: time="2025-03-17T17:27:40.680944843Z" level=info msg="StartContainer for \"03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919\" returns successfully"
Mar 17 17:27:41.052949 kubelet[2389]: E0317 17:27:41.052776 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:41.446069 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:27:41.575253 kubelet[2389]: I0317 17:27:41.574875 2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vl4xg" podStartSLOduration=7.574852184 podStartE2EDuration="7.574852184s" podCreationTimestamp="2025-03-17 17:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:41.57284492 +0000 UTC m=+80.467816373" watchObservedRunningTime="2025-03-17 17:27:41.574852184 +0000 UTC m=+80.469823613"
Mar 17 17:27:42.053359 kubelet[2389]: E0317 17:27:42.053313 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:42.996315 kubelet[2389]: E0317 17:27:42.996102 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:43.054713 kubelet[2389]: E0317 17:27:43.054387 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:44.055495 kubelet[2389]: E0317 17:27:44.055397 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:45.056318 kubelet[2389]: E0317 17:27:45.056244 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:45.557351 (udev-worker)[5099]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:45.558635 (udev-worker)[5097]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:45.581821 systemd-networkd[1853]: lxc_health: Link UP
Mar 17 17:27:45.603576 systemd-networkd[1853]: lxc_health: Gained carrier
Mar 17 17:27:46.057240 kubelet[2389]: E0317 17:27:46.057138 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:47.057683 kubelet[2389]: E0317 17:27:47.057585 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:47.095060 systemd-networkd[1853]: lxc_health: Gained IPv6LL
Mar 17 17:27:48.058212 kubelet[2389]: E0317 17:27:48.058132 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:49.056677 systemd[1]: run-containerd-runc-k8s.io-03fd2a8de3ac9da5e145abff6e41c9c713b0d32dc24d3d98b7bb2c3b03c57919-runc.Xt7dXd.mount: Deactivated successfully.
Mar 17 17:27:49.062991 kubelet[2389]: E0317 17:27:49.059202 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:49.405189 ntpd[1912]: Listen normally on 15 lxc_health [fe80::7c0c:f2ff:fedb:6322%15]:123
Mar 17 17:27:49.405841 ntpd[1912]: 17 Mar 17:27:49 ntpd[1912]: Listen normally on 15 lxc_health [fe80::7c0c:f2ff:fedb:6322%15]:123
Mar 17 17:27:50.059621 kubelet[2389]: E0317 17:27:50.059540 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:51.060783 kubelet[2389]: E0317 17:27:51.060709 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:52.061884 kubelet[2389]: E0317 17:27:52.061795 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:53.062380 kubelet[2389]: E0317 17:27:53.062304 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:54.063259 kubelet[2389]: E0317 17:27:54.063174 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:55.063798 kubelet[2389]: E0317 17:27:55.063726 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:56.064297 kubelet[2389]: E0317 17:27:56.064238 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:57.064929 kubelet[2389]: E0317 17:27:57.064870 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:58.065893 kubelet[2389]: E0317 17:27:58.065833 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:27:59.066089 kubelet[2389]: E0317 17:27:59.066017 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:00.067252 kubelet[2389]: E0317 17:28:00.067194 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:01.067405 kubelet[2389]: E0317 17:28:01.067337 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:02.067892 kubelet[2389]: E0317 17:28:02.067829 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:02.995910 kubelet[2389]: E0317 17:28:02.995838 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:03.068780 kubelet[2389]: E0317 17:28:03.068603 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:04.069750 kubelet[2389]: E0317 17:28:04.069691 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:04.797310 kubelet[2389]: E0317 17:28:04.797212 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:27:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:27:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:27:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:27:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69703745},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\\\",\\\"registry.k8s.io/kube-proxy:v1.32.3\\\"],\\\"sizeBytes\\\":27369114},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"172.31.16.223\": Patch \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223/status?timeout=10s\": context deadline exceeded"
Mar 17 17:28:04.915455 kubelet[2389]: E0317 17:28:04.915380 2389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:28:05.070851 kubelet[2389]: E0317 17:28:05.070698 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:06.071396 kubelet[2389]: E0317 17:28:06.071321 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:07.072194 kubelet[2389]: E0317 17:28:07.072130 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:08.073042 kubelet[2389]: E0317 17:28:08.072982 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:09.073873 kubelet[2389]: E0317 17:28:09.073815 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:10.074286 kubelet[2389]: E0317 17:28:10.074217 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:11.074458 kubelet[2389]: E0317 17:28:11.074400 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:12.075542 kubelet[2389]: E0317 17:28:12.075476 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:13.076172 kubelet[2389]: E0317 17:28:13.076110 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:14.076538 kubelet[2389]: E0317 17:28:14.076479 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:14.798151 kubelet[2389]: E0317 17:28:14.797875 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.16.223\": Get \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223?timeout=10s\": context deadline exceeded"
Mar 17 17:28:14.916211 kubelet[2389]: E0317 17:28:14.915793 2389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": context deadline exceeded"
Mar 17 17:28:15.077267 kubelet[2389]: E0317 17:28:15.077156 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:16.078474 kubelet[2389]: E0317 17:28:16.078402 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:17.079350 kubelet[2389]: E0317 17:28:17.079279 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:18.079614 kubelet[2389]: E0317 17:28:18.079556 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:19.080242 kubelet[2389]: E0317 17:28:19.080183 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:20.081322 kubelet[2389]: E0317 17:28:20.081265 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:21.081820 kubelet[2389]: E0317 17:28:21.081765 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:22.082530 kubelet[2389]: E0317 17:28:22.082467 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:22.996158 kubelet[2389]: E0317 17:28:22.996095 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:23.037464 containerd[1940]: time="2025-03-17T17:28:23.037399654Z" level=info msg="StopPodSandbox for \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\""
Mar 17 17:28:23.038205 containerd[1940]: time="2025-03-17T17:28:23.037540966Z" level=info msg="TearDown network for sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" successfully"
Mar 17 17:28:23.038205 containerd[1940]: time="2025-03-17T17:28:23.037566742Z" level=info msg="StopPodSandbox for \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" returns successfully"
Mar 17 17:28:23.039543 containerd[1940]: time="2025-03-17T17:28:23.039107230Z" level=info msg="RemovePodSandbox for \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\""
Mar 17 17:28:23.039543 containerd[1940]: time="2025-03-17T17:28:23.039159658Z" level=info msg="Forcibly stopping sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\""
Mar 17 17:28:23.039543 containerd[1940]: time="2025-03-17T17:28:23.039275650Z" level=info msg="TearDown network for sandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" successfully"
Mar 17 17:28:23.046441 containerd[1940]: time="2025-03-17T17:28:23.046133494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:28:23.046441 containerd[1940]: time="2025-03-17T17:28:23.046230670Z" level=info msg="RemovePodSandbox \"c2630589a0202e6a1fbce4359b296483a49c25fcdde427e7af31fc4fce7ae265\" returns successfully"
Mar 17 17:28:23.083591 kubelet[2389]: E0317 17:28:23.083516 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:24.084174 kubelet[2389]: E0317 17:28:24.084110 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:24.798701 kubelet[2389]: E0317 17:28:24.798615 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.16.223\": Get \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:28:24.916110 kubelet[2389]: E0317 17:28:24.916033 2389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:28:25.084632 kubelet[2389]: E0317 17:28:25.084479 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:26.085496 kubelet[2389]: E0317 17:28:26.085429 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:27.086707 kubelet[2389]: E0317 17:28:27.086610 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:28.087332 kubelet[2389]: E0317 17:28:28.087223 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:29.088079 kubelet[2389]: E0317 17:28:29.088013 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:30.088404 kubelet[2389]: E0317 17:28:30.088340 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:31.089168 kubelet[2389]: E0317 17:28:31.089095 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:32.090266 kubelet[2389]: E0317 17:28:32.090203 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:33.090674 kubelet[2389]: E0317 17:28:33.090598 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:34.091015 kubelet[2389]: E0317 17:28:34.090953 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:34.799438 kubelet[2389]: E0317 17:28:34.799376 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.16.223\": Get \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223?timeout=10s\": context deadline exceeded"
Mar 17 17:28:34.917255 kubelet[2389]: E0317 17:28:34.917187 2389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:28:35.093777 kubelet[2389]: E0317 17:28:35.093004 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:35.095753 kubelet[2389]: E0317 17:28:35.095526 2389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": unexpected EOF"
Mar 17 17:28:35.095753 kubelet[2389]: I0317 17:28:35.095582 2389 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 17 17:28:36.093574 kubelet[2389]: E0317 17:28:36.093508 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:36.096390 kubelet[2389]: E0317 17:28:36.096270 2389 desired_state_of_world_populator.go:304] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.21.92:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.21.92:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config"
Mar 17 17:28:36.104671 kubelet[2389]: E0317 17:28:36.104521 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused - error from a previous attempt: dial tcp 172.31.21.92:6443: connect: connection reset by peer" interval="200ms"
Mar 17 17:28:36.107768 kubelet[2389]: E0317 17:28:36.107626 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.16.223\": Get \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Mar 17 17:28:36.107768 kubelet[2389]: E0317 17:28:36.107746 2389 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
Mar 17 17:28:37.094741 kubelet[2389]: E0317 17:28:37.094630 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:38.094929 kubelet[2389]: E0317 17:28:38.094862 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:39.095238 kubelet[2389]: E0317 17:28:39.095170 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:40.096275 kubelet[2389]: E0317 17:28:40.096212 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:41.096775 kubelet[2389]: E0317 17:28:41.096705 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:42.097870 kubelet[2389]: E0317 17:28:42.097804 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:42.995663 kubelet[2389]: E0317 17:28:42.995601 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:43.098828 kubelet[2389]: E0317 17:28:43.098762 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:44.099847 kubelet[2389]: E0317 17:28:44.099772 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:45.100190 kubelet[2389]: E0317 17:28:45.100128 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:46.100507 kubelet[2389]: E0317 17:28:46.100442 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:46.306124 kubelet[2389]: E0317 17:28:46.306043 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 17 17:28:47.101383 kubelet[2389]: E0317 17:28:47.101305 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:48.101756 kubelet[2389]: E0317 17:28:48.101695 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:49.102066 kubelet[2389]: E0317 17:28:49.101992 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:50.102535 kubelet[2389]: E0317 17:28:50.102474 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:51.103193 kubelet[2389]: E0317 17:28:51.103121 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:52.103496 kubelet[2389]: E0317 17:28:52.103438 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:53.103815 kubelet[2389]: E0317 17:28:53.103751 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:54.104774 kubelet[2389]: E0317 17:28:54.104710 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:55.105788 kubelet[2389]: E0317 17:28:55.105715 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:56.105970 kubelet[2389]: E0317 17:28:56.105904 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:56.465564 kubelet[2389]: E0317 17:28:56.465472 2389 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:28:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:28:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:28:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-03-17T17:28:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69703745},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\\\",\\\"registry.k8s.io/kube-proxy:v1.32.3\\\"],\\\"sizeBytes\\\":27369114},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"172.31.16.223\": Patch \"https://172.31.21.92:6443/api/v1/nodes/172.31.16.223/status?timeout=10s\": context deadline exceeded"
Mar 17 17:28:56.707696 kubelet[2389]: E0317 17:28:56.707589 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.223?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 17 17:28:57.106783 kubelet[2389]: E0317 17:28:57.106717 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:28:58.107810 kubelet[2389]: E0317 17:28:58.107732 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"