Dec 13 13:15:33.188473 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 13:15:33.188517 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 13:15:33.188541 kernel: KASLR disabled due to lack of seed Dec 13 13:15:33.188557 kernel: efi: EFI v2.7 by EDK II Dec 13 13:15:33.188573 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Dec 13 13:15:33.188588 kernel: secureboot: Secure boot disabled Dec 13 13:15:33.188605 kernel: ACPI: Early table checksum verification disabled Dec 13 13:15:33.188621 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 13:15:33.188637 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 13:15:33.188652 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 13:15:33.188671 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 13:15:33.188687 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 13:15:33.188702 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 13:15:33.188718 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 13:15:33.188736 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 13:15:33.188757 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 13:15:33.188774 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 13:15:33.188790 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 13:15:33.188806 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 13:15:33.188822 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 13:15:33.188838 kernel: printk: bootconsole [uart0] enabled Dec 13 13:15:33.188854 kernel: NUMA: Failed to initialise from firmware Dec 13 13:15:33.188871 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 13:15:33.188887 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 13:15:33.188904 kernel: Zone ranges: Dec 13 13:15:33.188920 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 13:15:33.188940 kernel: DMA32 empty Dec 13 13:15:33.188957 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 13:15:33.188973 kernel: Movable zone start for each node Dec 13 13:15:33.188988 kernel: Early memory node ranges Dec 13 13:15:33.189004 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 13:15:33.191352 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 13:15:33.191383 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 13:15:33.191400 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 13:15:33.191416 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 13:15:33.191433 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 13:15:33.191450 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 13:15:33.191467 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 13:15:33.191497 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Dec 13 13:15:33.191514 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 13:15:33.191555 kernel: psci: probing for conduit method from ACPI. Dec 13 13:15:33.191575 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 13:15:33.191593 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 13:15:33.191615 kernel: psci: Trusted OS migration not required Dec 13 13:15:33.191633 kernel: psci: SMC Calling Convention v1.1 Dec 13 13:15:33.191649 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 13:15:33.191666 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 13:15:33.191683 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 13:15:33.191700 kernel: Detected PIPT I-cache on CPU0 Dec 13 13:15:33.191717 kernel: CPU features: detected: GIC system register CPU interface Dec 13 13:15:33.191734 kernel: CPU features: detected: Spectre-v2 Dec 13 13:15:33.191751 kernel: CPU features: detected: Spectre-v3a Dec 13 13:15:33.191767 kernel: CPU features: detected: Spectre-BHB Dec 13 13:15:33.191784 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 13:15:33.191801 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 13:15:33.191823 kernel: alternatives: applying boot alternatives Dec 13 13:15:33.191842 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:15:33.191861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:15:33.191878 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:15:33.191895 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:15:33.191912 kernel: Fallback order for Node 0: 0 Dec 13 13:15:33.191928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 13:15:33.191945 kernel: Policy zone: Normal Dec 13 13:15:33.191962 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:15:33.191978 kernel: software IO TLB: area num 2. Dec 13 13:15:33.192000 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 13:15:33.192124 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Dec 13 13:15:33.192147 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:15:33.192164 kernel: trace event string verifier disabled Dec 13 13:15:33.192180 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:15:33.192198 kernel: rcu: RCU event tracing is enabled. Dec 13 13:15:33.192215 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:15:33.192233 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:15:33.192250 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:15:33.192267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 13:15:33.192284 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:15:33.192307 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 13:15:33.192325 kernel: GICv3: 96 SPIs implemented Dec 13 13:15:33.192341 kernel: GICv3: 0 Extended SPIs implemented Dec 13 13:15:33.192358 kernel: Root IRQ handler: gic_handle_irq Dec 13 13:15:33.192374 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 13:15:33.192391 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 13:15:33.192408 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 13:15:33.192425 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 13:15:33.192443 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 13:15:33.192461 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 13:15:33.192478 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 13:15:33.192494 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 13:15:33.192515 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:15:33.192533 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 13:15:33.192549 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 13:15:33.192566 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 13:15:33.192583 kernel: Console: colour dummy device 80x25 Dec 13 13:15:33.192601 kernel: printk: console [tty1] enabled Dec 13 13:15:33.192618 kernel: ACPI: Core revision 20230628 Dec 13 13:15:33.192635 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 13:15:33.192653 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:15:33.192670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:15:33.192692 kernel: landlock: Up and running. Dec 13 13:15:33.192709 kernel: SELinux: Initializing. Dec 13 13:15:33.192726 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:15:33.192744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:15:33.192761 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:15:33.192778 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:15:33.192796 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:15:33.192813 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:15:33.192830 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 13:15:33.192852 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 13:15:33.192869 kernel: Remapping and enabling EFI services. Dec 13 13:15:33.192887 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:15:33.192904 kernel: Detected PIPT I-cache on CPU1 Dec 13 13:15:33.192921 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 13:15:33.192939 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 13:15:33.192957 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 13:15:33.192974 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:15:33.192992 kernel: SMP: Total of 2 processors activated. 
Dec 13 13:15:33.193042 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 13:15:33.193087 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 13:15:33.193128 kernel: CPU features: detected: CRC32 instructions Dec 13 13:15:33.193154 kernel: CPU: All CPU(s) started at EL1 Dec 13 13:15:33.193173 kernel: alternatives: applying system-wide alternatives Dec 13 13:15:33.193191 kernel: devtmpfs: initialized Dec 13 13:15:33.193209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:15:33.193228 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:15:33.193247 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:15:33.193270 kernel: SMBIOS 3.0.0 present. Dec 13 13:15:33.193289 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 13:15:33.193307 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:15:33.193325 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:15:33.193344 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:15:33.193363 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:15:33.193382 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:15:33.193404 kernel: audit: type=2000 audit(0.222:1): state=initialized audit_enabled=0 res=1 Dec 13 13:15:33.193423 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:15:33.193441 kernel: cpuidle: using governor menu Dec 13 13:15:33.193459 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 13:15:33.193479 kernel: ASID allocator initialised with 65536 entries Dec 13 13:15:33.193497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:15:33.193516 kernel: Serial: AMBA PL011 UART driver Dec 13 13:15:33.193534 kernel: Modules: 17360 pages in range for non-PLT usage Dec 13 13:15:33.193553 kernel: Modules: 508880 pages in range for PLT usage Dec 13 13:15:33.193571 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:15:33.193595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:15:33.193613 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:15:33.193631 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 13:15:33.193649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:15:33.193668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:15:33.193686 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:15:33.193706 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 13:15:33.193724 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:15:33.193742 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:15:33.193765 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:15:33.193783 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:15:33.193801 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:15:33.193819 kernel: ACPI: Interpreter enabled Dec 13 13:15:33.193837 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:15:33.193854 kernel: ACPI: MCFG table detected, 1 entries Dec 13 13:15:33.193872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 13:15:33.194198 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:15:33.194413 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 13:15:33.194613 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 13:15:33.194807 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 13:15:33.195009 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 13:15:33.196568 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 13:15:33.196589 kernel: acpiphp: Slot [1] registered Dec 13 13:15:33.196609 kernel: acpiphp: Slot [2] registered Dec 13 13:15:33.196630 kernel: acpiphp: Slot [3] registered Dec 13 13:15:33.196658 kernel: acpiphp: Slot [4] registered Dec 13 13:15:33.196676 kernel: acpiphp: Slot [5] registered Dec 13 13:15:33.196694 kernel: acpiphp: Slot [6] registered Dec 13 13:15:33.196712 kernel: acpiphp: Slot [7] registered Dec 13 13:15:33.196729 kernel: acpiphp: Slot [8] registered Dec 13 13:15:33.196747 kernel: acpiphp: Slot [9] registered Dec 13 13:15:33.196765 kernel: acpiphp: Slot [10] registered Dec 13 13:15:33.196783 kernel: acpiphp: Slot [11] registered Dec 13 13:15:33.196801 kernel: acpiphp: Slot [12] registered Dec 13 13:15:33.196824 kernel: acpiphp: Slot [13] registered Dec 13 13:15:33.196843 kernel: acpiphp: Slot [14] registered Dec 13 13:15:33.196861 kernel: acpiphp: Slot [15] registered Dec 13 13:15:33.196879 kernel: acpiphp: Slot [16] registered Dec 13 13:15:33.196896 kernel: acpiphp: Slot [17] registered Dec 13 13:15:33.196914 kernel: acpiphp: Slot [18] registered Dec 13 13:15:33.196932 kernel: acpiphp: Slot [19] registered Dec 13 13:15:33.196950 kernel: acpiphp: Slot [20] registered Dec 13 13:15:33.196967 kernel: acpiphp: Slot [21] registered Dec 13 13:15:33.196985 kernel: acpiphp: Slot [22] registered Dec 13 13:15:33.197008 kernel: acpiphp: Slot [23] registered Dec 13 13:15:33.197057 kernel: acpiphp: Slot [24] registered Dec 13 13:15:33.197077 kernel: acpiphp: Slot [25] registered Dec 13 13:15:33.197095 kernel: acpiphp: Slot [26] registered Dec 13 13:15:33.197115 kernel: acpiphp: Slot [27] registered Dec 13 13:15:33.197133 kernel: acpiphp: Slot [28] registered Dec 13 13:15:33.197152 kernel: acpiphp: Slot [29] registered Dec 13 13:15:33.197172 kernel: acpiphp: Slot [30] registered Dec 13 13:15:33.197190 kernel: acpiphp: Slot [31] registered Dec 13 13:15:33.197217 kernel: PCI host bridge to bus 0000:00 Dec 13 13:15:33.197615 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 13:15:33.197825 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 13:15:33.199082 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 13:15:33.199334 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 13:15:33.199616 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 13:15:33.201227 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 13:15:33.201493 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 13:15:33.201769 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 13:15:33.201998 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 13:15:33.204776 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:15:33.205008 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 13:15:33.205320 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 13:15:33.205536 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 13:15:33.205735 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 13:15:33.205939 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:15:33.206880 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 13:15:33.207246 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 13:15:33.207449 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 13:15:33.208371 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 13:15:33.208608 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 13:15:33.208821 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 13:15:33.211065 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 13:15:33.213587 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 13:15:33.213626 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 13:15:33.213647 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 13:15:33.213666 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 13:15:33.213685 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 13:15:33.213718 kernel: iommu: Default domain type: Translated Dec 13 13:15:33.213737 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:15:33.213755 kernel: efivars: Registered efivars operations Dec 13 13:15:33.213773 kernel: vgaarb: loaded Dec 13 13:15:33.213792 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:15:33.213811 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:15:33.213830 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:15:33.213849 kernel: pnp: PnP ACPI init Dec 13 13:15:33.214093 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 13:15:33.214128 kernel: pnp: PnP ACPI: found 1 devices Dec 13 13:15:33.214147 kernel: NET: Registered PF_INET protocol family Dec 13 13:15:33.214166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:15:33.214184 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:15:33.214202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:15:33.214221 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:15:33.214239 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 13:15:33.214257 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:15:33.214280 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:15:33.214298 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:15:33.214316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:15:33.214334 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:15:33.214352 kernel: kvm [1]: HYP mode not available Dec 13 13:15:33.214370 kernel: Initialise system trusted keyrings Dec 13 13:15:33.214388 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:15:33.214406 kernel: Key type asymmetric registered Dec 13 13:15:33.214423 kernel: Asymmetric key parser 'x509' registered Dec 13 13:15:33.214446 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 13:15:33.214464 kernel: io scheduler mq-deadline registered Dec 13 
13:15:33.214482 kernel: io scheduler kyber registered Dec 13 13:15:33.214500 kernel: io scheduler bfq registered Dec 13 13:15:33.214727 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 13:15:33.214757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 13:15:33.214776 kernel: ACPI: button: Power Button [PWRB] Dec 13 13:15:33.214795 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 13:15:33.214813 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 13:15:33.214838 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:15:33.214859 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 13:15:33.215119 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 13:15:33.215148 kernel: printk: console [ttyS0] disabled Dec 13 13:15:33.215170 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 13:15:33.215189 kernel: printk: console [ttyS0] enabled Dec 13 13:15:33.215208 kernel: printk: bootconsole [uart0] disabled Dec 13 13:15:33.215227 kernel: thunder_xcv, ver 1.0 Dec 13 13:15:33.215246 kernel: thunder_bgx, ver 1.0 Dec 13 13:15:33.215271 kernel: nicpf, ver 1.0 Dec 13 13:15:33.215289 kernel: nicvf, ver 1.0 Dec 13 13:15:33.215502 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:15:33.215716 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:15:32 UTC (1734095732) Dec 13 13:15:33.215743 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:15:33.215762 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 13:15:33.215780 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 13:15:33.215799 kernel: watchdog: Hard watchdog permanently disabled Dec 13 13:15:33.215823 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:15:33.215841 kernel: Segment Routing with IPv6 Dec 13 13:15:33.215859 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:15:33.215877 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:15:33.215895 kernel: Key type dns_resolver registered Dec 13 13:15:33.215912 kernel: registered taskstats version 1 Dec 13 13:15:33.215930 kernel: Loading compiled-in X.509 certificates Dec 13 13:15:33.215948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 13:15:33.215967 kernel: Key type .fscrypt registered Dec 13 13:15:33.215990 kernel: Key type fscrypt-provisioning registered Dec 13 13:15:33.216008 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:15:33.216051 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:15:33.216070 kernel: ima: No architecture policies found Dec 13 13:15:33.216089 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:15:33.216107 kernel: clk: Disabling unused clocks Dec 13 13:15:33.216125 kernel: Freeing unused kernel memory: 39936K Dec 13 13:15:33.216143 kernel: Run /init as init process Dec 13 13:15:33.216161 kernel: with arguments: Dec 13 13:15:33.216186 kernel: /init Dec 13 13:15:33.216204 kernel: with environment: Dec 13 13:15:33.216222 kernel: HOME=/ Dec 13 13:15:33.218435 kernel: TERM=linux Dec 13 13:15:33.218461 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:15:33.218484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:15:33.218508 systemd[1]: Detected virtualization amazon. Dec 13 13:15:33.218529 systemd[1]: Detected architecture arm64. Dec 13 13:15:33.218559 systemd[1]: Running in initrd. Dec 13 13:15:33.218579 systemd[1]: No hostname configured, using default hostname. Dec 13 13:15:33.218598 systemd[1]: Hostname set to <localhost>. Dec 13 13:15:33.218618 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:15:33.218638 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:15:33.218657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:15:33.218677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:15:33.218698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:15:33.218724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:15:33.218744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:15:33.218764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:15:33.218787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:15:33.218807 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:15:33.218827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:15:33.218851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:15:33.218873 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:15:33.218892 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:15:33.218913 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:15:33.218933 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:15:33.218954 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:15:33.218974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:15:33.218995 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:15:33.219084 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 13:15:33.219117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:15:33.219139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:15:33.219160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:15:33.219181 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:15:33.219201 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:15:33.219221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:15:33.219242 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:15:33.219261 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:15:33.219281 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:15:33.219307 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:15:33.219329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:33.219349 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:15:33.219369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:15:33.219389 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:15:33.219477 systemd-journald[252]: Collecting audit messages is disabled. Dec 13 13:15:33.219547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:15:33.219571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:15:33.219598 kernel: Bridge firewalling registered Dec 13 13:15:33.219619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:15:33.219639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:33.219659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:15:33.219682 systemd-journald[252]: Journal started Dec 13 13:15:33.219737 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2298afae61cbb831b36bb14ff494ed) is 8.0M, max 75.3M, 67.3M free. Dec 13 13:15:33.166298 systemd-modules-load[253]: Inserted module 'overlay' Dec 13 13:15:33.196565 systemd-modules-load[253]: Inserted module 'br_netfilter' Dec 13 13:15:33.232960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:15:33.250536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:15:33.250623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:15:33.261808 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:15:33.272366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:15:33.293709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:15:33.303300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:15:33.307210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:15:33.321386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:15:33.328941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 13:15:33.344290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:15:33.366390 dracut-cmdline[288]: dracut-dracut-053 Dec 13 13:15:33.372107 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:15:33.431576 systemd-resolved[290]: Positive Trust Anchors: Dec 13 13:15:33.432402 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:15:33.432465 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:15:33.546324 kernel: SCSI subsystem initialized Dec 13 13:15:33.554157 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:15:33.566139 kernel: iscsi: registered transport (tcp) Dec 13 13:15:33.588665 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:15:33.588777 kernel: QLogic iSCSI HBA Driver Dec 13 13:15:33.676197 kernel: random: crng init done Dec 13 13:15:33.675324 systemd-resolved[290]: Defaulting to hostname 'linux'. Dec 13 13:15:33.679135 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:15:33.681867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:15:33.705967 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:15:33.716354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:15:33.755043 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:15:33.756042 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:15:33.758036 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:15:33.822056 kernel: raid6: neonx8 gen() 6634 MB/s Dec 13 13:15:33.839046 kernel: raid6: neonx4 gen() 6616 MB/s Dec 13 13:15:33.856044 kernel: raid6: neonx2 gen() 5493 MB/s Dec 13 13:15:33.873044 kernel: raid6: neonx1 gen() 3965 MB/s Dec 13 13:15:33.890060 kernel: raid6: int64x8 gen() 3635 MB/s Dec 13 13:15:33.907047 kernel: raid6: int64x4 gen() 3717 MB/s Dec 13 13:15:33.924077 kernel: raid6: int64x2 gen() 3613 MB/s Dec 13 13:15:33.941835 kernel: raid6: int64x1 gen() 2752 MB/s Dec 13 13:15:33.941907 kernel: raid6: using algorithm neonx8 gen() 6634 MB/s Dec 13 13:15:33.959810 kernel: raid6: .... 
xor() 4658 MB/s, rmw enabled Dec 13 13:15:33.959854 kernel: raid6: using neon recovery algorithm Dec 13 13:15:33.967067 kernel: xor: measuring software checksum speed Dec 13 13:15:33.969072 kernel: 8regs : 11688 MB/sec Dec 13 13:15:33.969111 kernel: 32regs : 13011 MB/sec Dec 13 13:15:33.970184 kernel: arm64_neon : 9579 MB/sec Dec 13 13:15:33.970216 kernel: xor: using function: 32regs (13011 MB/sec) Dec 13 13:15:34.053058 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:15:34.072568 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:15:34.087344 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:15:34.126556 systemd-udevd[472]: Using default interface naming scheme 'v255'. Dec 13 13:15:34.135722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:15:34.146270 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:15:34.191481 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Dec 13 13:15:34.248218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:15:34.258351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:15:34.380953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:15:34.395645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:15:34.455414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:15:34.461296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:15:34.464117 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:15:34.495881 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:15:34.512325 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:15:34.552648 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:15:34.582860 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 13:15:34.582929 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 13:15:34.608324 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 13:15:34.608583 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 13:15:34.608832 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:91:9d:6d:e2:cf Dec 13 13:15:34.590294 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:15:34.590519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:15:34.596323 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:15:34.598427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:15:34.633243 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 13:15:34.598627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:34.637248 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 13:15:34.600910 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:34.611436 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:15:34.631366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 13:15:34.654057 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 13:15:34.662436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:15:34.662514 kernel: GPT:9289727 != 16777215 Dec 13 13:15:34.662545 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:15:34.664258 kernel: GPT:9289727 != 16777215 Dec 13 13:15:34.664323 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:15:34.666051 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:15:34.681664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:34.695296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:15:34.737095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:15:34.772131 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (536) Dec 13 13:15:34.851047 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (529) Dec 13 13:15:34.860656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 13:15:34.887043 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 13:15:34.943619 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 13:15:34.949511 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 13:15:34.964804 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 13:15:34.976415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:15:34.994994 disk-uuid[665]: Primary Header is updated. Dec 13 13:15:34.994994 disk-uuid[665]: Secondary Entries is updated. Dec 13 13:15:34.994994 disk-uuid[665]: Secondary Header is updated. Dec 13 13:15:35.007106 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:15:36.025042 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:15:36.026224 disk-uuid[666]: The operation has completed successfully. Dec 13 13:15:36.193243 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:15:36.193461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:15:36.256261 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:15:36.263118 sh[927]: Success Dec 13 13:15:36.290071 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:15:36.395861 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:15:36.414237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:15:36.421099 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 13:15:36.456889 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:15:36.456950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:15:36.456991 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:15:36.458643 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:15:36.459882 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:15:36.576051 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 13:15:36.594359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:15:36.598343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:15:36.606336 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:15:36.621251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:15:36.646730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:15:36.646835 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:15:36.648693 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:15:36.654108 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:15:36.677499 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:15:36.680247 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:15:36.703260 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:15:36.712355 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:15:36.799224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:15:36.820352 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:15:36.875423 systemd-networkd[1119]: lo: Link UP Dec 13 13:15:36.875438 systemd-networkd[1119]: lo: Gained carrier Dec 13 13:15:36.881066 systemd-networkd[1119]: Enumeration completed Dec 13 13:15:36.882658 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:15:36.885628 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:36.885641 systemd-networkd[1119]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:15:36.887937 systemd[1]: Reached target network.target - Network. Dec 13 13:15:36.899187 systemd-networkd[1119]: eth0: Link UP Dec 13 13:15:36.899202 systemd-networkd[1119]: eth0: Gained carrier Dec 13 13:15:36.899219 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:15:36.922388 systemd-networkd[1119]: eth0: DHCPv4 address 172.31.17.245/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:15:37.230509 ignition[1044]: Ignition 2.20.0 Dec 13 13:15:37.231991 ignition[1044]: Stage: fetch-offline Dec 13 13:15:37.232433 ignition[1044]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:37.232456 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:37.232892 ignition[1044]: Ignition finished successfully Dec 13 13:15:37.238342 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:15:37.254392 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:15:37.276705 ignition[1128]: Ignition 2.20.0 Dec 13 13:15:37.276737 ignition[1128]: Stage: fetch Dec 13 13:15:37.277816 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:37.277849 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:37.278084 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:37.294295 ignition[1128]: PUT result: OK Dec 13 13:15:37.297662 ignition[1128]: parsed url from cmdline: "" Dec 13 13:15:37.297683 ignition[1128]: no config URL provided Dec 13 13:15:37.297705 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:15:37.297995 ignition[1128]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:15:37.298060 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:37.302930 ignition[1128]: PUT result: OK Dec 13 13:15:37.303048 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 13:15:37.306740 ignition[1128]: GET result: OK Dec 13 13:15:37.306835 ignition[1128]: parsing config with SHA512: c9f5b14f6763fcdcc24859e7c2f832e23838e6b5660210b0a2d51ddd533fd3da5290c80511c798dc6bc80fa5cbdd686acb32cc29b309af053705bf7b6406c683 Dec 13 13:15:37.317095 unknown[1128]: fetched base config from "system" Dec 13 13:15:37.317342 unknown[1128]: fetched base config from "system" Dec 13 13:15:37.317802 ignition[1128]: fetch: fetch complete Dec 13 13:15:37.317356 unknown[1128]: fetched user config from "aws" Dec 13 13:15:37.317814 ignition[1128]: fetch: fetch passed Dec 13 13:15:37.317897 ignition[1128]: Ignition finished successfully Dec 13 13:15:37.329079 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:15:37.347203 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:15:37.369151 ignition[1135]: Ignition 2.20.0 Dec 13 13:15:37.369183 ignition[1135]: Stage: kargs Dec 13 13:15:37.369956 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:37.369992 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:37.370235 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:37.372340 ignition[1135]: PUT result: OK Dec 13 13:15:37.382819 ignition[1135]: kargs: kargs passed Dec 13 13:15:37.382912 ignition[1135]: Ignition finished successfully Dec 13 13:15:37.385809 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:15:37.397341 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 13:15:37.424671 ignition[1141]: Ignition 2.20.0 Dec 13 13:15:37.424702 ignition[1141]: Stage: disks Dec 13 13:15:37.425645 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:37.425672 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:37.425898 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:37.429410 ignition[1141]: PUT result: OK Dec 13 13:15:37.436831 ignition[1141]: disks: disks passed Dec 13 13:15:37.436937 ignition[1141]: Ignition finished successfully Dec 13 13:15:37.440387 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:15:37.443124 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:15:37.445297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:15:37.447798 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:15:37.449752 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:15:37.452091 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:15:37.471399 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:15:37.521300 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:15:37.529341 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:15:37.540921 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:15:37.634053 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:15:37.634744 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:15:37.636405 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:15:37.653179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:15:37.666567 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:15:37.671693 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:15:37.671789 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:15:37.671842 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:15:37.686809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:15:37.703227 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Dec 13 13:15:37.707228 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:15:37.707300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:15:37.707327 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:15:37.708430 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:15:37.721051 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:15:37.721675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:15:38.148269 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:15:38.169351 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:15:38.191294 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:15:38.200228 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:15:38.591275 systemd-networkd[1119]: eth0: Gained IPv6LL Dec 13 13:15:38.600573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:15:38.608223 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:15:38.622529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:15:38.640056 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:15:38.640553 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:15:38.672825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:15:38.694365 ignition[1287]: INFO : Ignition 2.20.0 Dec 13 13:15:38.697105 ignition[1287]: INFO : Stage: mount Dec 13 13:15:38.697105 ignition[1287]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:38.697105 ignition[1287]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:38.697105 ignition[1287]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:38.706197 ignition[1287]: INFO : PUT result: OK Dec 13 13:15:38.713205 ignition[1287]: INFO : mount: mount passed Dec 13 13:15:38.714662 ignition[1287]: INFO : Ignition finished successfully Dec 13 13:15:38.718430 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:15:38.726200 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:15:38.745415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:15:38.764067 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1299) Dec 13 13:15:38.767745 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:15:38.767792 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:15:38.767819 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:15:38.774051 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:15:38.776930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:15:38.812783 ignition[1316]: INFO : Ignition 2.20.0 Dec 13 13:15:38.812783 ignition[1316]: INFO : Stage: files Dec 13 13:15:38.816557 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:38.816557 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:38.816557 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:38.822833 ignition[1316]: INFO : PUT result: OK Dec 13 13:15:38.828681 ignition[1316]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:15:38.831640 ignition[1316]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:15:38.831640 ignition[1316]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:15:38.852353 ignition[1316]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:15:38.855132 ignition[1316]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:15:38.858199 unknown[1316]: wrote ssh authorized keys file for user: core Dec 13 13:15:38.862450 ignition[1316]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:15:38.866932 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 13:15:39.247351 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 13:15:39.624274 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:15:39.628241 ignition[1316]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:15:39.628241 ignition[1316]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:15:39.628241 ignition[1316]: INFO : files: files passed Dec 13 13:15:39.628241 ignition[1316]: INFO : Ignition finished successfully Dec 13 13:15:39.636301 systemd[1]: Finished ignition-files.service - Ignition (files). 
Dec 13 13:15:39.657368 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:15:39.667311 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:15:39.673770 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:15:39.673959 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:15:39.705106 initrd-setup-root-after-ignition[1345]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:15:39.705106 initrd-setup-root-after-ignition[1345]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:15:39.711357 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:15:39.714650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:15:39.720173 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:15:39.729393 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:15:39.797392 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:15:39.799269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:15:39.804130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:15:39.807702 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:15:39.809766 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:15:39.833301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:15:39.858958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:15:39.872272 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:15:39.901761 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:15:39.902314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:15:39.908909 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:15:39.909140 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:15:39.917671 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:15:39.919452 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:15:39.919596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:15:39.922108 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:15:39.924089 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:15:39.925783 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:15:39.927808 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:15:39.929985 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:15:39.930116 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:15:39.935426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:15:39.937729 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:15:39.939630 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Dec 13 13:15:39.941529 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:15:39.943100 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:15:39.943210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:15:39.945388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:15:39.947459 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:15:39.969697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:15:39.979273 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:15:39.981475 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:15:39.981576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:15:39.983734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:15:39.983818 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:15:39.986050 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:15:39.986128 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:15:40.002231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:15:40.025233 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:15:40.026970 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:15:40.027124 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:15:40.031688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:15:40.051170 ignition[1370]: INFO : Ignition 2.20.0 Dec 13 13:15:40.051170 ignition[1370]: INFO : Stage: umount Dec 13 13:15:40.051170 ignition[1370]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:15:40.051170 ignition[1370]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:15:40.051170 ignition[1370]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:15:40.031793 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:15:40.075326 ignition[1370]: INFO : PUT result: OK Dec 13 13:15:40.075326 ignition[1370]: INFO : umount: umount passed Dec 13 13:15:40.075326 ignition[1370]: INFO : Ignition finished successfully Dec 13 13:15:40.060099 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:15:40.060297 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:15:40.077172 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:15:40.077362 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:15:40.081348 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:15:40.081455 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:15:40.086682 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:15:40.086781 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:15:40.090456 systemd[1]: Stopped target network.target - Network. Dec 13 13:15:40.097928 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:15:40.099644 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:15:40.102955 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:15:40.105518 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Dec 13 13:15:40.115980 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:15:40.124442 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:15:40.126958 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:15:40.130302 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:15:40.130388 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:15:40.132399 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:15:40.132473 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:15:40.134563 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:15:40.134656 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:15:40.137314 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:15:40.137393 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:15:40.139666 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:15:40.142272 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:15:40.151150 systemd-networkd[1119]: eth0: DHCPv6 lease lost Dec 13 13:15:40.151497 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:15:40.152863 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:15:40.153446 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:15:40.172957 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:15:40.173272 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:15:40.178902 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:15:40.180052 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:15:40.201701 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:15:40.205283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:15:40.205474 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:15:40.210163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:15:40.210260 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:15:40.210800 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:15:40.210873 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:15:40.214586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:15:40.214682 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:15:40.216928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:15:40.262579 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:15:40.263213 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:15:40.277216 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:15:40.277924 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:15:40.284639 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:15:40.284731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:15:40.287050 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 13:15:40.287122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:15:40.289040 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:15:40.289123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:15:40.291184 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:15:40.291262 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:15:40.293405 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:15:40.293482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:15:40.309435 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:15:40.332916 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:15:40.333055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:15:40.335360 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:15:40.335465 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:15:40.338228 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:15:40.338329 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:15:40.341931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:15:40.342045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:40.374941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:15:40.375152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:15:40.469682 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:15:40.469937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:15:40.474539 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:15:40.477668 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:15:40.477921 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:15:40.494374 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:15:40.510795 systemd[1]: Switching root. Dec 13 13:15:40.568289 systemd-journald[252]: Journal stopped Dec 13 13:15:43.242074 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Dec 13 13:15:43.242224 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:15:43.242273 kernel: SELinux: policy capability open_perms=1 Dec 13 13:15:43.242308 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:15:43.242338 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:15:43.242368 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:15:43.244619 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:15:43.244655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:15:43.244695 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:15:43.244724 kernel: audit: type=1403 audit(1734095741.339:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:15:43.244766 systemd[1]: Successfully loaded SELinux policy in 75.648ms. Dec 13 13:15:43.244812 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.982ms. 
Dec 13 13:15:43.244844 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:15:43.244873 systemd[1]: Detected virtualization amazon. Dec 13 13:15:43.244905 systemd[1]: Detected architecture arm64. Dec 13 13:15:43.244937 systemd[1]: Detected first boot. Dec 13 13:15:43.244967 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:15:43.245001 zram_generator::config[1412]: No configuration found. Dec 13 13:15:43.245065 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:15:43.245100 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:15:43.245131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:15:43.247112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:15:43.247168 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:15:43.247211 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:15:43.247243 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:15:43.247272 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:15:43.247302 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:15:43.247331 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:15:43.247359 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:15:43.247390 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:15:43.247420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:15:43.247468 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:15:43.247507 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:15:43.247536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:15:43.247568 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:15:43.247599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:15:43.247630 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:15:43.247660 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:15:43.247690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:15:43.247718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:15:43.247754 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:15:43.247782 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:15:43.247814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:15:43.247844 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:15:43.247872 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:15:43.247902 systemd[1]: Reached target swap.target - Swaps. 
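The systemd 255 banner above lists compile-time features as +/- flags plus a default-hierarchy setting. A small sketch that splits that banner (copied from the log) into enabled and disabled feature sets:

# Feature banner copied from the systemd 255 line above.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
          "default-hierarchy=unified")

enabled = {t[1:] for t in banner.split() if t.startswith("+")}
disabled = {t[1:] for t in banner.split() if t.startswith("-")}
options = dict(t.split("=", 1) for t in banner.split() if "=" in t)

print(sorted(disabled))   # APPARMOR, GNUTLS, ACL, FIDO2, ...
print(options)            # {'default-hierarchy': 'unified'}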
Dec 13 13:15:43.247930 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:15:43.247959 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:15:43.247991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:15:43.248073 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:15:43.248107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:15:43.248136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:15:43.248165 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:15:43.248193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:15:43.248221 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:15:43.248256 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:15:43.248287 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:15:43.248321 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:15:43.248354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:15:43.248384 systemd[1]: Reached target machines.target - Containers. Dec 13 13:15:43.248415 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:15:43.248447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:43.248475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:15:43.248506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:15:43.248534 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:15:43.248567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:15:43.248597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:15:43.248626 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:15:43.248654 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:15:43.248684 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:15:43.248712 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:15:43.248740 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:15:43.248769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:15:43.248801 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:15:43.248829 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:15:43.248860 kernel: fuse: init (API version 7.39) Dec 13 13:15:43.248891 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:15:43.248919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:15:43.248946 kernel: loop: module loaded Dec 13 13:15:43.248973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 13 13:15:43.249001 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:15:43.251183 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:15:43.251227 systemd[1]: Stopped verity-setup.service. Dec 13 13:15:43.251267 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:15:43.251296 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:15:43.251327 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:15:43.251358 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:15:43.251390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:15:43.251419 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:15:43.251516 systemd-journald[1494]: Collecting audit messages is disabled. Dec 13 13:15:43.251585 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:15:43.251616 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:15:43.251647 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:15:43.251675 systemd-journald[1494]: Journal started Dec 13 13:15:43.251736 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec2298afae61cbb831b36bb14ff494ed) is 8.0M, max 75.3M, 67.3M free. Dec 13 13:15:42.664514 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:15:42.734607 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 13:15:42.735408 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:15:43.262364 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:15:43.262748 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:15:43.265202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:15:43.268249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:15:43.269159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:15:43.272881 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:15:43.274483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:15:43.277297 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:15:43.277697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:15:43.281122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:15:43.286076 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:15:43.288978 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:15:43.330212 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:15:43.342488 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:15:43.355396 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:15:43.359262 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:15:43.359327 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:15:43.365771 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Dec 13 13:15:43.372069 kernel: ACPI: bus type drm_connector registered Dec 13 13:15:43.381448 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:15:43.393528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:15:43.396076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:43.402414 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:15:43.413593 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:15:43.417315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:15:43.421734 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:15:43.424334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:15:43.431292 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:15:43.438458 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:15:43.448448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:15:43.460210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:15:43.463797 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:15:43.464225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:15:43.466855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:15:43.469311 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:15:43.474097 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:15:43.506058 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 13:15:43.517665 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec2298afae61cbb831b36bb14ff494ed is 36.086ms for 893 entries. Dec 13 13:15:43.517665 systemd-journald[1494]: System Journal (/var/log/journal/ec2298afae61cbb831b36bb14ff494ed) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:15:43.604298 systemd-journald[1494]: Received client request to flush runtime journal. Dec 13 13:15:43.604402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:15:43.563534 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:15:43.566577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:15:43.588431 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:15:43.594474 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:15:43.610769 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:15:43.618229 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:15:43.646101 kernel: loop1: detected capacity change from 0 to 113552 Dec 13 13:15:43.662127 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Dec 13 13:15:43.662157 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. 
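For scale, the journal flush reported above works out to roughly 40 microseconds per entry. A quick check using only the figures from the log:

# Figures from the systemd-journald flush message above.
flush_ms = 36.086    # "Time spent on flushing ... is 36.086ms"
entries = 893        # "... for 893 entries"
print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~40.4 us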
Dec 13 13:15:43.663549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:15:43.686217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:15:43.695161 udevadm[1554]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:15:43.703646 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:15:43.717206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:15:43.725879 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:15:43.797876 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:15:43.812650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:15:43.838061 kernel: loop2: detected capacity change from 0 to 116784 Dec 13 13:15:43.869531 systemd-tmpfiles[1564]: ACLs are not supported, ignoring. Dec 13 13:15:43.869570 systemd-tmpfiles[1564]: ACLs are not supported, ignoring. Dec 13 13:15:43.879255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:15:43.974075 kernel: loop3: detected capacity change from 0 to 53784 Dec 13 13:15:44.118147 kernel: loop4: detected capacity change from 0 to 194096 Dec 13 13:15:44.148146 kernel: loop5: detected capacity change from 0 to 113552 Dec 13 13:15:44.173079 kernel: loop6: detected capacity change from 0 to 116784 Dec 13 13:15:44.193226 kernel: loop7: detected capacity change from 0 to 53784 Dec 13 13:15:44.210927 (sd-merge)[1569]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 13:15:44.212598 (sd-merge)[1569]: Merged extensions into '/usr'. Dec 13 13:15:44.224138 systemd[1]: Reloading requested from client PID 1539 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:15:44.224410 systemd[1]: Reloading... Dec 13 13:15:44.399344 zram_generator::config[1595]: No configuration found. Dec 13 13:15:44.746951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:44.872814 systemd[1]: Reloading finished in 647 ms. Dec 13 13:15:44.911112 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:15:44.913889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:15:44.928351 systemd[1]: Starting ensure-sysext.service... Dec 13 13:15:44.938358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:15:44.944432 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:15:44.962522 systemd[1]: Reloading requested from client PID 1647 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:15:44.962554 systemd[1]: Reloading... Dec 13 13:15:45.026791 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:15:45.030502 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:15:45.036615 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 13:15:45.043402 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Dec 13 13:15:45.043592 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Dec 13 13:15:45.063666 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:15:45.063693 systemd-tmpfiles[1648]: Skipping /boot Dec 13 13:15:45.076533 systemd-udevd[1649]: Using default interface naming scheme 'v255'. Dec 13 13:15:45.114401 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:15:45.114449 systemd-tmpfiles[1648]: Skipping /boot Dec 13 13:15:45.123134 zram_generator::config[1673]: No configuration found. Dec 13 13:15:45.355284 (udev-worker)[1688]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:15:45.424932 ldconfig[1534]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:15:45.482081 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1690) Dec 13 13:15:45.513087 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1690) Dec 13 13:15:45.565328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:45.730169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1706) Dec 13 13:15:45.762870 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:15:45.763268 systemd[1]: Reloading finished in 800 ms. Dec 13 13:15:45.789272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:15:45.793108 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:15:45.814719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:15:45.884700 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:15:45.900756 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:15:45.904335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:45.909847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:15:45.920869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:15:45.930517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:15:45.932749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:45.937600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:15:45.949778 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:15:45.959538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:15:45.967545 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:15:45.974930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:46.012976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 13:15:46.013456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:15:46.039111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:15:46.041153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:15:46.056867 systemd[1]: Finished ensure-sysext.service. Dec 13 13:15:46.071269 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:15:46.072122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:15:46.092281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:46.097159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:15:46.099386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:46.099510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:15:46.099612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:15:46.099684 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:15:46.112395 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:15:46.159298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 13:15:46.164208 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:15:46.167899 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:15:46.169253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:15:46.187734 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:15:46.201479 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:15:46.205097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:15:46.248223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:15:46.251426 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:15:46.271521 lvm[1877]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:15:46.277525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:15:46.279580 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:15:46.317441 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:15:46.322834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:15:46.341743 augenrules[1890]: No rules Dec 13 13:15:46.344619 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:15:46.345091 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:15:46.356616 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:15:46.357808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 13:15:46.372703 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:15:46.378492 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:15:46.385084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:46.412841 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:15:46.456101 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:15:46.498327 systemd-networkd[1850]: lo: Link UP Dec 13 13:15:46.498823 systemd-networkd[1850]: lo: Gained carrier Dec 13 13:15:46.501911 systemd-networkd[1850]: Enumeration completed Dec 13 13:15:46.502270 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:15:46.503323 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:46.503444 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:15:46.507657 systemd-networkd[1850]: eth0: Link UP Dec 13 13:15:46.508286 systemd-networkd[1850]: eth0: Gained carrier Dec 13 13:15:46.509249 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:46.516725 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:15:46.523165 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.17.245/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:15:46.542144 systemd-resolved[1853]: Positive Trust Anchors: Dec 13 13:15:46.542183 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:15:46.542246 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:15:46.565213 systemd-resolved[1853]: Defaulting to hostname 'linux'. Dec 13 13:15:46.568344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:15:46.570540 systemd[1]: Reached target network.target - Network. Dec 13 13:15:46.572306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:15:46.574468 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:15:46.576540 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:15:46.578834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:15:46.581393 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:15:46.583594 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:15:46.585993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
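The DHCPv4 lease above puts eth0 at 172.31.17.245/20 with gateway 172.31.16.1. A quick check with Python's ipaddress module, using only the values from the log, shows the gateway sits inside that /20:

import ipaddress

# Address, prefix, and gateway from the systemd-networkd lease above.
iface = ipaddress.ip_interface("172.31.17.245/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gateway in iface.network)      # True
print(iface.network.num_addresses)   # 4096 addresses in the /20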
Dec 13 13:15:46.588348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:15:46.588420 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:15:46.590124 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:15:46.593126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:15:46.597705 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:15:46.617090 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:15:46.620214 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:15:46.622650 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:15:46.625304 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:15:46.627185 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:15:46.627237 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:15:46.629899 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:15:46.637228 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:15:46.649483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:15:46.654411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:15:46.660601 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:15:46.664222 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:15:46.678956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:15:46.691561 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 13:15:46.698595 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 13:15:46.707714 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:15:46.716615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:15:46.728615 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:15:46.731521 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:15:46.734499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:15:46.737437 jq[1917]: false Dec 13 13:15:46.745441 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:15:46.761234 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:15:46.784598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:15:46.784945 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 13:15:46.816107 jq[1927]: true Dec 13 13:15:46.860155 dbus-daemon[1916]: [system] SELinux support is enabled Dec 13 13:15:46.862767 dbus-daemon[1916]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 13:15:46.865220 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:15:46.872677 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:15:46.873145 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:15:46.885137 dbus-daemon[1916]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:15:46.877955 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:15:46.878047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:15:46.880581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:15:46.880619 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:15:46.904512 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 13:15:46.938291 extend-filesystems[1918]: Found loop4 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found loop5 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found loop6 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found loop7 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p1 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p2 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p3 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found usr Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p4 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p6 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p7 Dec 13 13:15:46.938291 extend-filesystems[1918]: Found nvme0n1p9 Dec 13 13:15:46.993519 extend-filesystems[1918]: Checking size of /dev/nvme0n1p9 Dec 13 13:15:47.010678 jq[1939]: true Dec 13 13:15:46.948632 (ntainerd)[1941]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:15:46.983210 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:15:46.987740 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:15:47.021177 update_engine[1925]: I20241213 13:15:47.020620 1925 main.cc:92] Flatcar Update Engine starting Dec 13 13:15:47.025720 update_engine[1925]: I20241213 13:15:47.025645 1925 update_check_scheduler.cc:74] Next update check in 2m13s Dec 13 13:15:47.028110 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 13:15:47.031080 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 13:15:47.043402 ntpd[1920]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: ---------------------------------------------------- Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: ntp-4 is maintained by Network Time Foundation, Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: corporation. Support and training for ntp-4 are Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: available at https://www.nwtime.org/support Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: ---------------------------------------------------- Dec 13 13:15:47.055100 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: proto: precision = 0.108 usec (-23) Dec 13 13:15:47.050731 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:15:47.043484 ntpd[1920]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:15:47.043507 ntpd[1920]: ---------------------------------------------------- Dec 13 13:15:47.043527 ntpd[1920]: ntp-4 is maintained by Network Time Foundation, Dec 13 13:15:47.043546 ntpd[1920]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:15:47.043563 ntpd[1920]: corporation. Support and training for ntp-4 are Dec 13 13:15:47.043582 ntpd[1920]: available at https://www.nwtime.org/support Dec 13 13:15:47.043599 ntpd[1920]: ---------------------------------------------------- Dec 13 13:15:47.054574 ntpd[1920]: proto: precision = 0.108 usec (-23) Dec 13 13:15:47.063513 ntpd[1920]: basedate set to 2024-12-01 Dec 13 13:15:47.072212 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: basedate set to 2024-12-01 Dec 13 13:15:47.072212 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: gps base set to 2024-12-01 (week 2343) Dec 13 13:15:47.072212 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:15:47.072212 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:15:47.063557 ntpd[1920]: gps base set to 2024-12-01 (week 2343) Dec 13 13:15:47.069737 ntpd[1920]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:15:47.069814 ntpd[1920]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:15:47.075302 ntpd[1920]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listen normally on 3 eth0 172.31.17.245:123 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listen normally on 4 lo [::1]:123 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: bind(21) AF_INET6 fe80::491:9dff:fe6d:e2cf%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: unable to create socket on eth0 (5) for fe80::491:9dff:fe6d:e2cf%2#123 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: failed to init interface for address fe80::491:9dff:fe6d:e2cf%2 Dec 13 13:15:47.078009 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: Listening on routing socket on fd #21 for interface updates Dec 13 13:15:47.075389 ntpd[1920]: Listen normally on 3 eth0 
172.31.17.245:123 Dec 13 13:15:47.075477 ntpd[1920]: Listen normally on 4 lo [::1]:123 Dec 13 13:15:47.075557 ntpd[1920]: bind(21) AF_INET6 fe80::491:9dff:fe6d:e2cf%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:15:47.075596 ntpd[1920]: unable to create socket on eth0 (5) for fe80::491:9dff:fe6d:e2cf%2#123 Dec 13 13:15:47.075622 ntpd[1920]: failed to init interface for address fe80::491:9dff:fe6d:e2cf%2 Dec 13 13:15:47.075676 ntpd[1920]: Listening on routing socket on fd #21 for interface updates Dec 13 13:15:47.080227 extend-filesystems[1918]: Resized partition /dev/nvme0n1p9 Dec 13 13:15:47.087069 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:15:47.093068 extend-filesystems[1967]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:15:47.118277 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 13:15:47.118336 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:15:47.118336 ntpd[1920]: 13 Dec 13:15:47 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:15:47.092119 ntpd[1920]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:15:47.225957 coreos-metadata[1915]: Dec 13 13:15:47.225 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:15:47.229439 coreos-metadata[1915]: Dec 13 13:15:47.229 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 13:15:47.234407 coreos-metadata[1915]: Dec 13 13:15:47.234 INFO Fetch successful Dec 13 13:15:47.234615 coreos-metadata[1915]: Dec 13 13:15:47.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 13:15:47.235387 systemd-logind[1924]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:15:47.235460 systemd-logind[1924]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 13:15:47.248308 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 13:15:47.238064 systemd-logind[1924]: New seat seat0. 
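The EXT4 resize above grows /dev/nvme0n1p9 from 553472 to 1489915 blocks; with the 4 KiB block size that resize2fs reports below, that is roughly 2.1 GiB growing to 5.7 GiB. A quick check on those numbers:

# Block counts from the EXT4-fs resize message above; resize2fs reports 4k blocks.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915
print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> {new_blocks * BLOCK / 2**30:.2f} GiB")
# 2.11 GiB -> 5.68 GiB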
Dec 13 13:15:47.248705 coreos-metadata[1915]: Dec 13 13:15:47.241 INFO Fetch successful Dec 13 13:15:47.248705 coreos-metadata[1915]: Dec 13 13:15:47.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 13:15:47.248705 coreos-metadata[1915]: Dec 13 13:15:47.247 INFO Fetch successful Dec 13 13:15:47.248705 coreos-metadata[1915]: Dec 13 13:15:47.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.249 INFO Fetch successful Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.250 INFO Fetch failed with 404: resource not found Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.254 INFO Fetch successful Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.259 INFO Fetch successful Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.263 INFO Fetch successful Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.263 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.265 INFO Fetch successful Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.265 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 13:15:47.285256 coreos-metadata[1915]: Dec 13 13:15:47.266 INFO Fetch successful Dec 13 13:15:47.259240 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:15:47.291392 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 13:15:47.291392 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:15:47.291392 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 13:15:47.304986 extend-filesystems[1918]: Resized filesystem in /dev/nvme0n1p9 Dec 13 13:15:47.307658 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:15:47.308087 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:15:47.336933 bash[1986]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:15:47.339739 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:15:47.353511 systemd[1]: Starting sshkeys.service... Dec 13 13:15:47.401052 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1692) Dec 13 13:15:47.420502 locksmithd[1962]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:15:47.460590 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
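The coreos-metadata fetches above follow the EC2 IMDSv2 pattern visible in the log: a PUT to /latest/api/token for a session token, then GETs against the 2021-01-03 meta-data tree using that token. A minimal sketch of the same sequence with Python's standard library and the endpoints from the log, as an illustration rather than the agent's actual code:

import urllib.request

IMDS = "http://169.254.169.254"

def get_token(ttl_seconds: int = 21600) -> str:
    # "PUT http://169.254.169.254/latest/api/token: Attempt #1" in the log above
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def get_metadata(path: str, token: str) -> str:
    # e.g. "/2021-01-03/meta-data/instance-id", as fetched above
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = get_token()
    for path in ("/2021-01-03/meta-data/instance-id",
                 "/2021-01-03/meta-data/placement/availability-zone"):
        print(path, "->", get_metadata(path, token))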
Dec 13 13:15:47.448197 dbus-daemon[1916]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 13:15:47.452561 dbus-daemon[1916]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1949 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:15:47.472601 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 13:15:47.492206 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:15:47.499683 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:15:47.502627 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:15:47.521232 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:15:47.555885 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:15:47.609003 polkitd[2024]: Started polkitd version 121 Dec 13 13:15:47.634159 containerd[1941]: time="2024-12-13T13:15:47.634001975Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:15:47.638718 polkitd[2024]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:15:47.638845 polkitd[2024]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:15:47.649226 polkitd[2024]: Finished loading, compiling and executing 2 rules Dec 13 13:15:47.653255 dbus-daemon[1916]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:15:47.653558 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 13:15:47.657618 polkitd[2024]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:15:47.716637 systemd-hostnamed[1949]: Hostname set to <ip-172-31-17-245> (transient) Dec 13 13:15:47.717434 systemd-resolved[1853]: System hostname changed to 'ip-172-31-17-245'. Dec 13 13:15:47.786060 containerd[1941]: time="2024-12-13T13:15:47.783843023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.788521715Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.788594039Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.788630051Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.788920415Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.788954591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789105863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789137255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789480239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789513803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789550751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790041 containerd[1941]: time="2024-12-13T13:15:47.789578075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.790638 containerd[1941]: time="2024-12-13T13:15:47.789734291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.793053 containerd[1941]: time="2024-12-13T13:15:47.791292371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:47.793053 containerd[1941]: time="2024-12-13T13:15:47.791748395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:47.793053 containerd[1941]: time="2024-12-13T13:15:47.791791643Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:15:47.793053 containerd[1941]: time="2024-12-13T13:15:47.792166187Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:15:47.793053 containerd[1941]: time="2024-12-13T13:15:47.792285023Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:15:47.807340 coreos-metadata[2030]: Dec 13 13:15:47.807 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:15:47.809401 coreos-metadata[2030]: Dec 13 13:15:47.809 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 13:15:47.811129 containerd[1941]: time="2024-12-13T13:15:47.811040616Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:15:47.811351 containerd[1941]: time="2024-12-13T13:15:47.811144944Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:15:47.811351 containerd[1941]: time="2024-12-13T13:15:47.811180632Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:15:47.811351 containerd[1941]: time="2024-12-13T13:15:47.811215816Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 13:15:47.811351 containerd[1941]: time="2024-12-13T13:15:47.811249368Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:15:47.811806 containerd[1941]: time="2024-12-13T13:15:47.811531332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:15:47.811975 containerd[1941]: time="2024-12-13T13:15:47.811914012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812161428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812216532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812253084Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812287272Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812321244Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.812370 containerd[1941]: time="2024-12-13T13:15:47.812352552Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812384484Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812420424Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812456724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812487924Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812518764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812570076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812606988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812640588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812676120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812706432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812738520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812766096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812801604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.813230 containerd[1941]: time="2024-12-13T13:15:47.812831592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815147 containerd[1941]: time="2024-12-13T13:15:47.812862924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815147 containerd[1941]: time="2024-12-13T13:15:47.812892120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815147 containerd[1941]: time="2024-12-13T13:15:47.812920632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815147 containerd[1941]: time="2024-12-13T13:15:47.812950848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815147 containerd[1941]: time="2024-12-13T13:15:47.812982852Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:15:47.815812 coreos-metadata[2030]: Dec 13 13:15:47.814 INFO Fetch successful Dec 13 13:15:47.815812 coreos-metadata[2030]: Dec 13 13:15:47.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:15:47.815812 coreos-metadata[2030]: Dec 13 13:15:47.814 INFO Fetch successful Dec 13 13:15:47.815950 containerd[1941]: time="2024-12-13T13:15:47.814791840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815950 containerd[1941]: time="2024-12-13T13:15:47.815527344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.815950 containerd[1941]: time="2024-12-13T13:15:47.815557872Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.816852180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.816971652Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.816997872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.817047660Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.817073808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.817113300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.817138008Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:15:47.817167 containerd[1941]: time="2024-12-13T13:15:47.817163112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:15:47.819065 containerd[1941]: time="2024-12-13T13:15:47.817694820Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:15:47.819065 containerd[1941]: time="2024-12-13T13:15:47.817794612Z" level=info msg="Connect containerd service" Dec 13 13:15:47.819065 containerd[1941]: time="2024-12-13T13:15:47.817853964Z" level=info msg="using legacy CRI server" Dec 13 13:15:47.819065 containerd[1941]: time="2024-12-13T13:15:47.817870620Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:15:47.819966 containerd[1941]: 
time="2024-12-13T13:15:47.819903252Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:15:47.820762 unknown[2030]: wrote ssh authorized keys file for user: core Dec 13 13:15:47.822498 containerd[1941]: time="2024-12-13T13:15:47.822347832Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:15:47.823306 containerd[1941]: time="2024-12-13T13:15:47.823240332Z" level=info msg="Start subscribing containerd event" Dec 13 13:15:47.823402 containerd[1941]: time="2024-12-13T13:15:47.823331544Z" level=info msg="Start recovering state" Dec 13 13:15:47.823511 containerd[1941]: time="2024-12-13T13:15:47.823474212Z" level=info msg="Start event monitor" Dec 13 13:15:47.823563 containerd[1941]: time="2024-12-13T13:15:47.823508616Z" level=info msg="Start snapshots syncer" Dec 13 13:15:47.823563 containerd[1941]: time="2024-12-13T13:15:47.823533036Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:15:47.823563 containerd[1941]: time="2024-12-13T13:15:47.823551312Z" level=info msg="Start streaming server" Dec 13 13:15:47.826746 containerd[1941]: time="2024-12-13T13:15:47.826619736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:15:47.826964 containerd[1941]: time="2024-12-13T13:15:47.826804308Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:15:47.828882 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:15:47.833498 containerd[1941]: time="2024-12-13T13:15:47.832066368Z" level=info msg="containerd successfully booted in 0.203335s" Dec 13 13:15:47.924716 update-ssh-keys[2098]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:15:47.929648 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:15:47.940997 systemd[1]: Finished sshkeys.service. Dec 13 13:15:48.044209 ntpd[1920]: bind(24) AF_INET6 fe80::491:9dff:fe6d:e2cf%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:15:48.044963 ntpd[1920]: 13 Dec 13:15:48 ntpd[1920]: bind(24) AF_INET6 fe80::491:9dff:fe6d:e2cf%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:15:48.044963 ntpd[1920]: 13 Dec 13:15:48 ntpd[1920]: unable to create socket on eth0 (6) for fe80::491:9dff:fe6d:e2cf%2#123 Dec 13 13:15:48.044963 ntpd[1920]: 13 Dec 13:15:48 ntpd[1920]: failed to init interface for address fe80::491:9dff:fe6d:e2cf%2 Dec 13 13:15:48.044270 ntpd[1920]: unable to create socket on eth0 (6) for fe80::491:9dff:fe6d:e2cf%2#123 Dec 13 13:15:48.044299 ntpd[1920]: failed to init interface for address fe80::491:9dff:fe6d:e2cf%2 Dec 13 13:15:48.311731 sshd_keygen[1961]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:15:48.318258 systemd-networkd[1850]: eth0: Gained IPv6LL Dec 13 13:15:48.321841 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:15:48.326085 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:15:48.337764 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 13:15:48.347377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:48.363828 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Dec 13 13:15:48.392298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:15:48.409527 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:15:48.415628 systemd[1]: Started sshd@0-172.31.17.245:22-139.178.89.65:49164.service - OpenSSH per-connection server daemon (139.178.89.65:49164). Dec 13 13:15:48.454574 amazon-ssm-agent[2123]: Initializing new seelog logger Dec 13 13:15:48.455260 amazon-ssm-agent[2123]: New Seelog Logger Creation Complete Dec 13 13:15:48.455465 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.455714 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.457072 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 processing appconfig overrides Dec 13 13:15:48.457192 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.457269 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.458110 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:15:48.461058 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 processing appconfig overrides Dec 13 13:15:48.463486 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.465878 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO Proxy environment variables: Dec 13 13:15:48.466082 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.466451 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 processing appconfig overrides Dec 13 13:15:48.475148 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.475148 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:15:48.475148 amazon-ssm-agent[2123]: 2024/12/13 13:15:48 processing appconfig overrides Dec 13 13:15:48.475536 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:15:48.476714 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:15:48.487530 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:15:48.540400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:15:48.551798 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:15:48.564847 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:15:48.568511 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 13:15:48.574140 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO https_proxy: Dec 13 13:15:48.674215 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO http_proxy: Dec 13 13:15:48.772876 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO no_proxy: Dec 13 13:15:48.818131 sshd[2137]: Accepted publickey for core from 139.178.89.65 port 49164 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:48.821926 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO Checking if agent identity type OnPrem can be assumed Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO Checking if agent identity type EC2 can be assumed Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO Agent will take identity from EC2 Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [Registrar] Starting registrar module Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [EC2Identity] EC2 registration was successful. Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [CredentialRefresher] credentialRefresher has started Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 13:15:48.843220 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 13:15:48.840977 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:15:48.851684 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:15:48.862001 systemd-logind[1924]: New session 1 of user core. Dec 13 13:15:48.872105 amazon-ssm-agent[2123]: 2024-12-13 13:15:48 INFO [CredentialRefresher] Next credential rotation will be in 31.658322462333334 minutes Dec 13 13:15:48.890811 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:15:48.902725 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:15:48.918512 (systemd)[2157]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:15:49.150156 systemd[2157]: Queued start job for default target default.target. Dec 13 13:15:49.157475 systemd[2157]: Created slice app.slice - User Application Slice. Dec 13 13:15:49.157805 systemd[2157]: Reached target paths.target - Paths. 
Dec 13 13:15:49.157985 systemd[2157]: Reached target timers.target - Timers. Dec 13 13:15:49.169339 systemd[2157]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:15:49.190137 systemd[2157]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:15:49.190334 systemd[2157]: Reached target sockets.target - Sockets. Dec 13 13:15:49.190371 systemd[2157]: Reached target basic.target - Basic System. Dec 13 13:15:49.190499 systemd[2157]: Reached target default.target - Main User Target. Dec 13 13:15:49.190574 systemd[2157]: Startup finished in 258ms. Dec 13 13:15:49.191305 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:15:49.203352 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:15:49.369868 systemd[1]: Started sshd@1-172.31.17.245:22-139.178.89.65:53934.service - OpenSSH per-connection server daemon (139.178.89.65:53934). Dec 13 13:15:49.586136 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 53934 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:49.586813 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:49.601356 systemd-logind[1924]: New session 2 of user core. Dec 13 13:15:49.608339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:49.619616 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:49.621322 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:15:49.623647 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:15:49.631143 systemd[1]: Startup finished in 1.089s (kernel) + 8.529s (initrd) + 8.364s (userspace) = 17.984s. Dec 13 13:15:49.676797 agetty[2150]: failed to open credentials directory Dec 13 13:15:49.678867 agetty[2151]: failed to open credentials directory Dec 13 13:15:49.768270 sshd[2176]: Connection closed by 139.178.89.65 port 53934 Dec 13 13:15:49.768105 sshd-session[2168]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:49.777300 systemd-logind[1924]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:15:49.777402 systemd[1]: sshd@1-172.31.17.245:22-139.178.89.65:53934.service: Deactivated successfully. Dec 13 13:15:49.781812 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:15:49.784072 systemd-logind[1924]: Removed session 2. Dec 13 13:15:49.809460 systemd[1]: Started sshd@2-172.31.17.245:22-139.178.89.65:53944.service - OpenSSH per-connection server daemon (139.178.89.65:53944). Dec 13 13:15:49.872796 amazon-ssm-agent[2123]: 2024-12-13 13:15:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 13:15:49.973968 amazon-ssm-agent[2123]: 2024-12-13 13:15:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Dec 13 13:15:50.001272 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 53944 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:50.003809 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:50.022237 systemd-logind[1924]: New session 3 of user core. Dec 13 13:15:50.025687 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 13:15:50.076276 amazon-ssm-agent[2123]: 2024-12-13 13:15:49 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 13:15:50.148127 sshd[2198]: Connection closed by 139.178.89.65 port 53944 Dec 13 13:15:50.148850 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:50.156570 systemd-logind[1924]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:15:50.157974 systemd[1]: sshd@2-172.31.17.245:22-139.178.89.65:53944.service: Deactivated successfully. Dec 13 13:15:50.165842 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:15:50.167960 systemd-logind[1924]: Removed session 3. Dec 13 13:15:50.194205 systemd[1]: Started sshd@3-172.31.17.245:22-139.178.89.65:53948.service - OpenSSH per-connection server daemon (139.178.89.65:53948). Dec 13 13:15:50.396635 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 53948 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:50.400202 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:50.410223 systemd-logind[1924]: New session 4 of user core. Dec 13 13:15:50.418581 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:15:50.459314 kubelet[2174]: E1213 13:15:50.459230 2174 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:50.463635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:50.463977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:50.464668 systemd[1]: kubelet.service: Consumed 1.348s CPU time. Dec 13 13:15:50.551272 sshd[2212]: Connection closed by 139.178.89.65 port 53948 Dec 13 13:15:50.552143 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:50.557914 systemd[1]: sshd@3-172.31.17.245:22-139.178.89.65:53948.service: Deactivated successfully. Dec 13 13:15:50.562177 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:15:50.563688 systemd-logind[1924]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:15:50.565546 systemd-logind[1924]: Removed session 4. Dec 13 13:15:50.593461 systemd[1]: Started sshd@4-172.31.17.245:22-139.178.89.65:53950.service - OpenSSH per-connection server daemon (139.178.89.65:53950). Dec 13 13:15:50.770449 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 53950 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:50.772756 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:50.780340 systemd-logind[1924]: New session 5 of user core. Dec 13 13:15:50.788374 systemd[1]: Started session-5.scope - Session 5 of User core. 
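The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a node that is provisioned afterwards (e.g. by kubeadm or a test harness, as the later restart suggests) that file is written as part of the join step, so the failure at first boot is expected. Purely to show the shape of the file the unit is looking for, here is a hedged sketch of a minimal KubeletConfiguration written from Python; every field value is illustrative, not what this node's real config contains.

```python
# Illustrative KubeletConfiguration matching the path the failing unit expects.
# Field values are placeholders; the provisioning step is expected to generate
# the real file, so do not treat this as the node's actual configuration.
from pathlib import Path

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
"""

def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(KUBELET_CONFIG)

if __name__ == "__main__":
    print(KUBELET_CONFIG)
```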
Dec 13 13:15:50.915879 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:15:50.917036 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:15:50.931560 sudo[2221]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:50.955095 sshd[2220]: Connection closed by 139.178.89.65 port 53950 Dec 13 13:15:50.954889 sshd-session[2218]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:50.961816 systemd[1]: sshd@4-172.31.17.245:22-139.178.89.65:53950.service: Deactivated successfully. Dec 13 13:15:50.964842 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:15:50.967246 systemd-logind[1924]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:15:50.969350 systemd-logind[1924]: Removed session 5. Dec 13 13:15:50.990561 systemd[1]: Started sshd@5-172.31.17.245:22-139.178.89.65:53952.service - OpenSSH per-connection server daemon (139.178.89.65:53952). Dec 13 13:15:51.044424 ntpd[1920]: Listen normally on 7 eth0 [fe80::491:9dff:fe6d:e2cf%2]:123 Dec 13 13:15:51.045109 ntpd[1920]: 13 Dec 13:15:51 ntpd[1920]: Listen normally on 7 eth0 [fe80::491:9dff:fe6d:e2cf%2]:123 Dec 13 13:15:51.181276 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 53952 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:51.183749 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:51.192457 systemd-logind[1924]: New session 6 of user core. Dec 13 13:15:51.204289 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:15:51.307572 sudo[2230]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:15:51.308880 sudo[2230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:15:51.314638 sudo[2230]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:51.324572 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:15:51.325205 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:15:51.352587 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:15:51.399072 augenrules[2252]: No rules Dec 13 13:15:51.401960 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:15:51.402551 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:15:51.404671 sudo[2229]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:51.427509 sshd[2228]: Connection closed by 139.178.89.65 port 53952 Dec 13 13:15:51.428353 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:51.435491 systemd[1]: sshd@5-172.31.17.245:22-139.178.89.65:53952.service: Deactivated successfully. Dec 13 13:15:51.438595 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:15:51.440131 systemd-logind[1924]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:15:51.442151 systemd-logind[1924]: Removed session 6. Dec 13 13:15:51.463333 systemd[1]: Started sshd@6-172.31.17.245:22-139.178.89.65:53960.service - OpenSSH per-connection server daemon (139.178.89.65:53960). 
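The sudo commands above switch SELinux to enforcing and delete the default audit rule files before audit-rules.service is restarted, which is why augenrules reports "No rules": it assembles whatever remains under /etc/audit/rules.d/ and finds nothing. As a hypothetical example of a file it would pick up on the next reload, a small sketch (file name and watch rules are illustrative, not taken from this node):

```python
# Illustrative only: augenrules concatenates everything under
# /etc/audit/rules.d/. After the deletions above it finds nothing, hence
# "No rules". A file like this (hypothetical name and watches) would be
# loaded on the next 'augenrules --load' or audit-rules.service restart.
from pathlib import Path

RULES = """\
## watch a couple of sensitive files (illustrative)
-w /etc/passwd -p wa -k identity
-w /etc/ssh/sshd_config -p wa -k sshd_config
"""

def write_rules(path: str = "/etc/audit/rules.d/90-local.rules") -> None:
    Path(path).write_text(RULES)

if __name__ == "__main__":
    print(RULES)
```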
Dec 13 13:15:51.658191 sshd[2260]: Accepted publickey for core from 139.178.89.65 port 53960 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:51.659790 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:51.667159 systemd-logind[1924]: New session 7 of user core. Dec 13 13:15:51.675248 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:15:51.778751 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:15:51.779861 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:15:52.979838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:52.980831 systemd[1]: kubelet.service: Consumed 1.348s CPU time. Dec 13 13:15:52.987532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:53.033229 systemd[1]: Reloading requested from client PID 2300 ('systemctl') (unit session-7.scope)... Dec 13 13:15:53.033284 systemd[1]: Reloading... Dec 13 13:15:53.210054 zram_generator::config[2340]: No configuration found. Dec 13 13:15:53.461173 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:53.647667 systemd[1]: Reloading finished in 613 ms. Dec 13 13:15:53.766481 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:15:53.766714 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:15:53.767474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:53.779928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:54.812282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:54.818820 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:15:54.900112 kubelet[2400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:15:54.900112 kubelet[2400]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:15:54.900112 kubelet[2400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
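The deprecation warnings above are for flags the kubelet unit receives through the KUBELET_KUBEADM_ARGS environment variable (reported as unset at the very first start, further up). In the usual kubeadm layout those flags live in /var/lib/kubelet/kubeadm-flags.env; the sketch below shows that typical shape, with the container runtime socket and volume plugin directory taken from messages in this log and the rest treated as assumptions rather than this node's actual contents.

```python
# Sketch of a typical kubeadm-flags.env providing the deprecated flags seen in
# the log. Socket and volume-plugin paths appear in this boot log; the file
# path and pause image tag are conventional assumptions, not read from the node.
from pathlib import Path

FLAGS_ENV = (
    'KUBELET_KUBEADM_ARGS="'
    "--container-runtime-endpoint=unix:///run/containerd/containerd.sock "
    "--pod-infra-container-image=registry.k8s.io/pause:3.8 "
    "--volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
    '"\n'
)

def write_flags(path: str = "/var/lib/kubelet/kubeadm-flags.env") -> None:
    Path(path).write_text(FLAGS_ENV)

if __name__ == "__main__":
    print(FLAGS_ENV, end="")
```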
Dec 13 13:15:54.901630 kubelet[2400]: I1213 13:15:54.901547 2400 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:15:55.941135 kubelet[2400]: I1213 13:15:55.941003 2400 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:15:55.944127 kubelet[2400]: I1213 13:15:55.942086 2400 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:15:55.944127 kubelet[2400]: I1213 13:15:55.942664 2400 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:15:55.970744 kubelet[2400]: I1213 13:15:55.970686 2400 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:15:55.984321 kubelet[2400]: I1213 13:15:55.984227 2400 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:15:55.984759 kubelet[2400]: I1213 13:15:55.984708 2400 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:15:55.985056 kubelet[2400]: I1213 13:15:55.984760 2400 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.245","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:15:55.985238 kubelet[2400]: I1213 13:15:55.985096 2400 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:15:55.985238 kubelet[2400]: I1213 13:15:55.985118 2400 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:15:55.985359 kubelet[2400]: I1213 13:15:55.985327 2400 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:55.986622 kubelet[2400]: I1213 13:15:55.986580 2400 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:15:55.986622 kubelet[2400]: I1213 13:15:55.986619 2400 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:15:55.986779 kubelet[2400]: I1213 13:15:55.986713 2400 kubelet.go:312] "Adding apiserver pod source" Dec 13 
13:15:55.986779 kubelet[2400]: I1213 13:15:55.986761 2400 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:15:55.988269 kubelet[2400]: E1213 13:15:55.987551 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:15:55.988269 kubelet[2400]: E1213 13:15:55.987722 2400 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:15:55.989090 kubelet[2400]: I1213 13:15:55.988742 2400 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:15:55.989271 kubelet[2400]: I1213 13:15:55.989185 2400 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:15:55.989330 kubelet[2400]: W1213 13:15:55.989271 2400 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:15:55.990712 kubelet[2400]: I1213 13:15:55.990387 2400 server.go:1264] "Started kubelet" Dec 13 13:15:55.994063 kubelet[2400]: I1213 13:15:55.992141 2400 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:15:55.994063 kubelet[2400]: I1213 13:15:55.993367 2400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:15:55.994063 kubelet[2400]: I1213 13:15:55.993861 2400 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:15:55.994063 kubelet[2400]: I1213 13:15:55.993921 2400 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:15:55.998551 kubelet[2400]: I1213 13:15:55.998500 2400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:15:56.007914 kubelet[2400]: E1213 13:15:56.007858 2400 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:15:56.010811 kubelet[2400]: E1213 13:15:56.010764 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.011085 kubelet[2400]: I1213 13:15:56.011063 2400 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:15:56.011398 kubelet[2400]: I1213 13:15:56.011375 2400 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:15:56.013047 kubelet[2400]: I1213 13:15:56.012992 2400 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:15:56.016133 kubelet[2400]: I1213 13:15:56.016082 2400 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:15:56.016289 kubelet[2400]: I1213 13:15:56.016265 2400 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:15:56.018975 kubelet[2400]: I1213 13:15:56.018934 2400 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:15:56.040074 kubelet[2400]: E1213 13:15:56.038886 2400 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.245.1810bee7065117f2 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.245,UID:172.31.17.245,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.245,},FirstTimestamp:2024-12-13 13:15:55.990341618 +0000 UTC m=+1.164792601,LastTimestamp:2024-12-13 13:15:55.990341618 +0000 UTC m=+1.164792601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.245,}" Dec 13 13:15:56.040074 kubelet[2400]: W1213 13:15:56.039541 2400 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.245" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:15:56.040074 kubelet[2400]: E1213 13:15:56.039649 2400 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.245" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:15:56.040074 kubelet[2400]: W1213 13:15:56.039731 2400 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:15:56.040074 kubelet[2400]: E1213 13:15:56.039766 2400 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:15:56.040507 kubelet[2400]: W1213 13:15:56.039828 2400 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster 
scope Dec 13 13:15:56.040507 kubelet[2400]: E1213 13:15:56.039852 2400 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:15:56.040507 kubelet[2400]: E1213 13:15:56.040068 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.245\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 13:15:56.051968 kubelet[2400]: I1213 13:15:56.051923 2400 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:15:56.052269 kubelet[2400]: I1213 13:15:56.052246 2400 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:15:56.052409 kubelet[2400]: I1213 13:15:56.052389 2400 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:56.055838 kubelet[2400]: I1213 13:15:56.055806 2400 policy_none.go:49] "None policy: Start" Dec 13 13:15:56.057157 kubelet[2400]: I1213 13:15:56.057126 2400 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:15:56.057545 kubelet[2400]: I1213 13:15:56.057525 2400 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:15:56.072806 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:15:56.094696 kubelet[2400]: E1213 13:15:56.094245 2400 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.245.1810bee7075c0f35 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.245,UID:172.31.17.245,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.17.245,},FirstTimestamp:2024-12-13 13:15:56.007837493 +0000 UTC m=+1.182288500,LastTimestamp:2024-12-13 13:15:56.007837493 +0000 UTC m=+1.182288500,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.245,}" Dec 13 13:15:56.107120 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:15:56.112552 kubelet[2400]: I1213 13:15:56.112500 2400 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.245" Dec 13 13:15:56.118959 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
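By this point the restarted kubelet is running: it serves on 0.0.0.0:10250, manages the kubepods cgroup slices, and is attempting to register the node, even though its API-server watches are still rejected as system:anonymous. A quick local liveness check can be done against the kubelet's health endpoint; the sketch assumes the upstream defaults (healthz bound to 127.0.0.1:10248) were not overridden on this node.

```python
# Probe the kubelet's local health endpoint. 127.0.0.1:10248 is the upstream
# default for --healthz-port; the authenticated API is the 0.0.0.0:10250
# listener shown in the log. Assumes defaults were not changed here.
import urllib.request

def kubelet_healthy(url: str = "http://127.0.0.1:10248/healthz") -> bool:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200 and resp.read().strip() == b"ok"
    except OSError:
        return False

if __name__ == "__main__":
    print("kubelet healthz ok:", kubelet_healthy())
```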
Dec 13 13:15:56.123834 kubelet[2400]: I1213 13:15:56.123446 2400 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.245" Dec 13 13:15:56.133292 kubelet[2400]: I1213 13:15:56.133128 2400 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:15:56.135189 kubelet[2400]: I1213 13:15:56.135036 2400 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:15:56.135609 kubelet[2400]: I1213 13:15:56.135479 2400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:15:56.138275 kubelet[2400]: I1213 13:15:56.138197 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:15:56.143991 kubelet[2400]: I1213 13:15:56.143699 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:15:56.146108 kubelet[2400]: I1213 13:15:56.145442 2400 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:15:56.146108 kubelet[2400]: I1213 13:15:56.145501 2400 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:15:56.146108 kubelet[2400]: E1213 13:15:56.145576 2400 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 13:15:56.150991 kubelet[2400]: E1213 13:15:56.149807 2400 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.245\" not found" Dec 13 13:15:56.172948 kubelet[2400]: E1213 13:15:56.172895 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.274703 kubelet[2400]: E1213 13:15:56.273952 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.347285 sudo[2263]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:56.370845 sshd[2262]: Connection closed by 139.178.89.65 port 53960 Dec 13 13:15:56.370665 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:56.375043 kubelet[2400]: E1213 13:15:56.374953 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.376562 systemd[1]: sshd@6-172.31.17.245:22-139.178.89.65:53960.service: Deactivated successfully. Dec 13 13:15:56.381047 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:15:56.384805 systemd-logind[1924]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:15:56.386934 systemd-logind[1924]: Removed session 7. 
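The repeated "system:anonymous ... forbidden" and "node not found" errors come from the kubelet's bootstrap phase; once the node registers here and "Certificate rotation detected" fires a little further down, the watches recover with the new client credentials. To confirm registration from a machine that has cluster credentials, a hedged sketch using the official Kubernetes Python client (assumes the kubernetes package is installed and a kubeconfig is available):

```python
# Verify the node object exists, from any machine with cluster credentials.
# Assumes the 'kubernetes' Python client is installed and kubeconfig is set up.
from kubernetes import client, config

def node_registered(name: str = "172.31.17.245") -> bool:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    return name in (n.metadata.name for n in v1.list_node().items)

if __name__ == "__main__":
    print("node registered:", node_registered())
```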
Dec 13 13:15:56.475847 kubelet[2400]: E1213 13:15:56.475778 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.577230 kubelet[2400]: E1213 13:15:56.576615 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.677340 kubelet[2400]: E1213 13:15:56.677285 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.778001 kubelet[2400]: E1213 13:15:56.777958 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.878810 kubelet[2400]: E1213 13:15:56.878377 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.949183 kubelet[2400]: I1213 13:15:56.948949 2400 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:15:56.949183 kubelet[2400]: W1213 13:15:56.949162 2400 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:15:56.949183 kubelet[2400]: W1213 13:15:56.949162 2400 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:15:56.979435 kubelet[2400]: E1213 13:15:56.979354 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:56.988729 kubelet[2400]: E1213 13:15:56.988671 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:15:57.080554 kubelet[2400]: E1213 13:15:57.080496 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:57.181450 kubelet[2400]: E1213 13:15:57.181214 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:57.281882 kubelet[2400]: E1213 13:15:57.281810 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:57.382415 kubelet[2400]: E1213 13:15:57.382360 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:57.483247 kubelet[2400]: E1213 13:15:57.483141 2400 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.245\" not found" Dec 13 13:15:57.584917 kubelet[2400]: I1213 13:15:57.584631 2400 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:15:57.585616 containerd[1941]: time="2024-12-13T13:15:57.585366403Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
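Once the node syncs, the kubelet pushes PodCIDR 192.168.1.0/24 to the CRI, while containerd still waits for a network plugin to drop a config under /etc/cni/net.d; on this node that will be Cilium, whose pod is admitted just below. Purely to illustrate the kind of file containerd is polling for, here is a generic bridge/host-local conflist for that CIDR; it is not what Cilium actually installs.

```python
# Illustrative CNI conflist for the PodCIDR announced in the log. On this node
# Cilium writes its own configuration; this only shows the shape of the file
# containerd polls /etc/cni/net.d for.
import json
from pathlib import Path

CONFLIST = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

def write_conflist(path: str = "/etc/cni/net.d/10-example.conflist") -> None:
    Path(path).write_text(json.dumps(CONFLIST, indent=2))

if __name__ == "__main__":
    print(json.dumps(CONFLIST, indent=2))
```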
Dec 13 13:15:57.586699 kubelet[2400]: I1213 13:15:57.586391 2400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:15:57.987870 kubelet[2400]: I1213 13:15:57.987755 2400 apiserver.go:52] "Watching apiserver" Dec 13 13:15:57.988953 kubelet[2400]: E1213 13:15:57.988871 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:15:57.993476 kubelet[2400]: I1213 13:15:57.993411 2400 topology_manager.go:215] "Topology Admit Handler" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" podNamespace="kube-system" podName="cilium-lqfmt" Dec 13 13:15:57.993724 kubelet[2400]: I1213 13:15:57.993692 2400 topology_manager.go:215] "Topology Admit Handler" podUID="42bcc727-c791-407a-996f-38f6a23d17e1" podNamespace="kube-system" podName="kube-proxy-x5xnq" Dec 13 13:15:58.006473 systemd[1]: Created slice kubepods-burstable-pod6f8e7977_105f_4870_a2e8_b8e9037bfc78.slice - libcontainer container kubepods-burstable-pod6f8e7977_105f_4870_a2e8_b8e9037bfc78.slice. Dec 13 13:15:58.012771 kubelet[2400]: I1213 13:15:58.012532 2400 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.025856 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hostproc\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.025937 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-etc-cni-netd\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.025992 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-run\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.026105 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-cgroup\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.026143 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-xtables-lock\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.026844 kubelet[2400]: I1213 13:15:58.026179 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-net\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027275 kubelet[2400]: I1213 13:15:58.026214 2400 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42bcc727-c791-407a-996f-38f6a23d17e1-kube-proxy\") pod \"kube-proxy-x5xnq\" (UID: \"42bcc727-c791-407a-996f-38f6a23d17e1\") " pod="kube-system/kube-proxy-x5xnq" Dec 13 13:15:58.027275 kubelet[2400]: I1213 13:15:58.026249 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42bcc727-c791-407a-996f-38f6a23d17e1-lib-modules\") pod \"kube-proxy-x5xnq\" (UID: \"42bcc727-c791-407a-996f-38f6a23d17e1\") " pod="kube-system/kube-proxy-x5xnq" Dec 13 13:15:58.027275 kubelet[2400]: I1213 13:15:58.026289 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwb8n\" (UniqueName: \"kubernetes.io/projected/42bcc727-c791-407a-996f-38f6a23d17e1-kube-api-access-mwb8n\") pod \"kube-proxy-x5xnq\" (UID: \"42bcc727-c791-407a-996f-38f6a23d17e1\") " pod="kube-system/kube-proxy-x5xnq" Dec 13 13:15:58.027275 kubelet[2400]: I1213 13:15:58.026324 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cni-path\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027275 kubelet[2400]: I1213 13:15:58.026370 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-lib-modules\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027520 kubelet[2400]: I1213 13:15:58.026407 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-kernel\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027520 kubelet[2400]: I1213 13:15:58.026440 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hubble-tls\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027520 kubelet[2400]: I1213 13:15:58.026475 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmgh\" (UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-kube-api-access-2tmgh\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027520 kubelet[2400]: I1213 13:15:58.026516 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-bpf-maps\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.027520 kubelet[2400]: I1213 13:15:58.026550 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6f8e7977-105f-4870-a2e8-b8e9037bfc78-clustermesh-secrets\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.028391 kubelet[2400]: I1213 13:15:58.026583 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-config-path\") pod \"cilium-lqfmt\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " pod="kube-system/cilium-lqfmt" Dec 13 13:15:58.028391 kubelet[2400]: I1213 13:15:58.026629 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42bcc727-c791-407a-996f-38f6a23d17e1-xtables-lock\") pod \"kube-proxy-x5xnq\" (UID: \"42bcc727-c791-407a-996f-38f6a23d17e1\") " pod="kube-system/kube-proxy-x5xnq" Dec 13 13:15:58.033670 systemd[1]: Created slice kubepods-besteffort-pod42bcc727_c791_407a_996f_38f6a23d17e1.slice - libcontainer container kubepods-besteffort-pod42bcc727_c791_407a_996f_38f6a23d17e1.slice. Dec 13 13:15:58.326703 containerd[1941]: time="2024-12-13T13:15:58.326530921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqfmt,Uid:6f8e7977-105f-4870-a2e8-b8e9037bfc78,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:58.346675 containerd[1941]: time="2024-12-13T13:15:58.346550050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x5xnq,Uid:42bcc727-c791-407a-996f-38f6a23d17e1,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:58.989688 kubelet[2400]: E1213 13:15:58.989592 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:15:58.995943 containerd[1941]: time="2024-12-13T13:15:58.995865389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:58.999910 containerd[1941]: time="2024-12-13T13:15:58.999835726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 13:15:59.003049 containerd[1941]: time="2024-12-13T13:15:59.002964201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:59.005707 containerd[1941]: time="2024-12-13T13:15:59.005464391Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:59.006429 containerd[1941]: time="2024-12-13T13:15:59.006323864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:15:59.012163 containerd[1941]: time="2024-12-13T13:15:59.012042406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:59.014246 containerd[1941]: time="2024-12-13T13:15:59.013838155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 687.163247ms" Dec 13 13:15:59.016046 containerd[1941]: time="2024-12-13T13:15:59.015956242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 669.275315ms" Dec 13 13:15:59.234719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524956512.mount: Deactivated successfully. Dec 13 13:15:59.354455 containerd[1941]: time="2024-12-13T13:15:59.354130579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:59.354455 containerd[1941]: time="2024-12-13T13:15:59.354297378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:59.355352 containerd[1941]: time="2024-12-13T13:15:59.354334141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:59.355352 containerd[1941]: time="2024-12-13T13:15:59.354901052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:59.366925 containerd[1941]: time="2024-12-13T13:15:59.366753351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:59.367167 containerd[1941]: time="2024-12-13T13:15:59.366854298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:59.367167 containerd[1941]: time="2024-12-13T13:15:59.367121551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:59.367600 containerd[1941]: time="2024-12-13T13:15:59.367307860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:59.538336 systemd[1]: Started cri-containerd-2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c.scope - libcontainer container 2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c. Dec 13 13:15:59.541472 systemd[1]: Started cri-containerd-637d256324a3596ef602112d775397193f90454c24d5f466a66820d22424705e.scope - libcontainer container 637d256324a3596ef602112d775397193f90454c24d5f466a66820d22424705e. 
Dec 13 13:15:59.613188 containerd[1941]: time="2024-12-13T13:15:59.612137720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqfmt,Uid:6f8e7977-105f-4870-a2e8-b8e9037bfc78,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\"" Dec 13 13:15:59.617284 containerd[1941]: time="2024-12-13T13:15:59.617215886Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:15:59.624827 containerd[1941]: time="2024-12-13T13:15:59.624665069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x5xnq,Uid:42bcc727-c791-407a-996f-38f6a23d17e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"637d256324a3596ef602112d775397193f90454c24d5f466a66820d22424705e\"" Dec 13 13:15:59.990118 kubelet[2400]: E1213 13:15:59.989937 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:00.991050 kubelet[2400]: E1213 13:16:00.990985 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:01.991455 kubelet[2400]: E1213 13:16:01.991386 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:02.991791 kubelet[2400]: E1213 13:16:02.991737 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:03.992983 kubelet[2400]: E1213 13:16:03.992818 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:04.993576 kubelet[2400]: E1213 13:16:04.993496 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:05.795925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380984436.mount: Deactivated successfully. 
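The file_linux.go entry that now repeats about once a second is the kubelet's static-pod file source: it is configured to watch /etc/kubernetes/manifests, finds no such directory, and says so while ignoring it, which is harmless unless static pods are actually expected on this node. If the messages were unwanted, a minimal sketch of the obvious remedy (creating the directory, whether or not it ever holds manifests) would be:

package main

import (
	"log"
	"os"
)

func main() {
	// Path taken from the log above; the permission bits are a reasonable guess, not mandated anywhere here.
	const manifestDir = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(manifestDir, 0o750); err != nil {
		log.Fatal(err)
	}
	log.Println("static pod manifest directory ready:", manifestDir)
}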
Dec 13 13:16:05.994146 kubelet[2400]: E1213 13:16:05.994092 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:06.995309 kubelet[2400]: E1213 13:16:06.995178 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:07.996368 kubelet[2400]: E1213 13:16:07.996328 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:08.388273 containerd[1941]: time="2024-12-13T13:16:08.388095611Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:08.390041 containerd[1941]: time="2024-12-13T13:16:08.389863459Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651538" Dec 13 13:16:08.391850 containerd[1941]: time="2024-12-13T13:16:08.391776723Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:08.396684 containerd[1941]: time="2024-12-13T13:16:08.396620700Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.779337196s" Dec 13 13:16:08.396910 containerd[1941]: time="2024-12-13T13:16:08.396685748Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 13:16:08.399500 containerd[1941]: time="2024-12-13T13:16:08.399324259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:16:08.402302 containerd[1941]: time="2024-12-13T13:16:08.402103263Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:16:08.425441 containerd[1941]: time="2024-12-13T13:16:08.425369979Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\"" Dec 13 13:16:08.426605 containerd[1941]: time="2024-12-13T13:16:08.426553050Z" level=info msg="StartContainer for \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\"" Dec 13 13:16:08.475628 systemd[1]: run-containerd-runc-k8s.io-f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572-runc.WHtq53.mount: Deactivated successfully. Dec 13 13:16:08.492329 systemd[1]: Started cri-containerd-f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572.scope - libcontainer container f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572. 
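The CreateContainer/StartContainer pair above is the kubelet driving containerd over CRI: the cilium image, pulled by digest in about 8.8 s, is used to create the mount-cgroup container inside sandbox 2ca9aa04..., which is then started and tracked as its own cri-containerd-<id>.scope. A stripped-down, hypothetical sketch of those two CRI v1 calls follows; the socket path is assumed, and the container and sandbox configs are heavily abbreviated compared with what the kubelet really sends (mounts, env, security context and the full PodSandboxConfig are omitted):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ID and container name come from the log; everything else is placeholder.
	create, err := rt.CreateContainer(context.Background(), &runtimeapi.CreateContainerRequest{
		PodSandboxId: "2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{Name: "cilium-lqfmt", Namespace: "kube-system"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(context.Background(),
		&runtimeapi.StartContainerRequest{ContainerId: create.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Println("started", create.ContainerId)
}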
Dec 13 13:16:08.543271 containerd[1941]: time="2024-12-13T13:16:08.543170139Z" level=info msg="StartContainer for \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\" returns successfully" Dec 13 13:16:08.569268 systemd[1]: cri-containerd-f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572.scope: Deactivated successfully. Dec 13 13:16:08.997112 kubelet[2400]: E1213 13:16:08.997070 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:09.418314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572-rootfs.mount: Deactivated successfully. Dec 13 13:16:09.998041 kubelet[2400]: E1213 13:16:09.997949 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:10.391371 containerd[1941]: time="2024-12-13T13:16:10.391190172Z" level=info msg="shim disconnected" id=f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572 namespace=k8s.io Dec 13 13:16:10.392653 containerd[1941]: time="2024-12-13T13:16:10.392174147Z" level=warning msg="cleaning up after shim disconnected" id=f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572 namespace=k8s.io Dec 13 13:16:10.392653 containerd[1941]: time="2024-12-13T13:16:10.392218546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:10.857701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293269033.mount: Deactivated successfully. Dec 13 13:16:10.999287 kubelet[2400]: E1213 13:16:10.999124 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:11.280957 containerd[1941]: time="2024-12-13T13:16:11.280287004Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:16:11.320037 containerd[1941]: time="2024-12-13T13:16:11.319952847Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\"" Dec 13 13:16:11.321868 containerd[1941]: time="2024-12-13T13:16:11.321665359Z" level=info msg="StartContainer for \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\"" Dec 13 13:16:11.398502 containerd[1941]: time="2024-12-13T13:16:11.398440104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:11.401333 containerd[1941]: time="2024-12-13T13:16:11.400574819Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 13:16:11.399353 systemd[1]: Started cri-containerd-e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786.scope - libcontainer container e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786. 
Dec 13 13:16:11.406323 containerd[1941]: time="2024-12-13T13:16:11.406253801Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:11.414426 containerd[1941]: time="2024-12-13T13:16:11.414336444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:11.417166 containerd[1941]: time="2024-12-13T13:16:11.417084414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 3.017701566s" Dec 13 13:16:11.417449 containerd[1941]: time="2024-12-13T13:16:11.417417904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 13:16:11.424459 containerd[1941]: time="2024-12-13T13:16:11.424093877Z" level=info msg="CreateContainer within sandbox \"637d256324a3596ef602112d775397193f90454c24d5f466a66820d22424705e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:16:11.455297 containerd[1941]: time="2024-12-13T13:16:11.455204268Z" level=info msg="CreateContainer within sandbox \"637d256324a3596ef602112d775397193f90454c24d5f466a66820d22424705e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8cfad0cb1b17c8f6ab38e2d32d50040025142b4023359e2057a065c39f0ea60\"" Dec 13 13:16:11.457613 containerd[1941]: time="2024-12-13T13:16:11.456613869Z" level=info msg="StartContainer for \"c8cfad0cb1b17c8f6ab38e2d32d50040025142b4023359e2057a065c39f0ea60\"" Dec 13 13:16:11.459358 containerd[1941]: time="2024-12-13T13:16:11.459295265Z" level=info msg="StartContainer for \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\" returns successfully" Dec 13 13:16:11.480594 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:16:11.481913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:16:11.482289 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:16:11.497566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:16:11.498116 systemd[1]: cri-containerd-e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786.scope: Deactivated successfully. Dec 13 13:16:11.551418 systemd[1]: Started cri-containerd-c8cfad0cb1b17c8f6ab38e2d32d50040025142b4023359e2057a065c39f0ea60.scope - libcontainer container c8cfad0cb1b17c8f6ab38e2d32d50040025142b4023359e2057a065c39f0ea60. Dec 13 13:16:11.571525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:16:11.649231 containerd[1941]: time="2024-12-13T13:16:11.648147077Z" level=info msg="StartContainer for \"c8cfad0cb1b17c8f6ab38e2d32d50040025142b4023359e2057a065c39f0ea60\" returns successfully" Dec 13 13:16:11.859770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786-rootfs.mount: Deactivated successfully. 
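apply-sysctl-overwrites ran only briefly, and around its exit systemd-sysctl is stopped and re-run, i.e. kernel parameters were touched and the stock sysctl service re-applied its configuration. The log does not say which keys were written, so the following is a hypothetical illustration of that kind of step, not cilium's actual list:

package main

import (
	"log"
	"os"
)

func main() {
	// rp_filter is used purely as an example key; substitute whatever your CNI actually needs.
	key := "/proc/sys/net/ipv4/conf/all/rp_filter"
	if err := os.WriteFile(key, []byte("0\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote 0 to", key)
}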
Dec 13 13:16:12.000368 kubelet[2400]: E1213 13:16:12.000274 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:12.130624 containerd[1941]: time="2024-12-13T13:16:12.130297055Z" level=info msg="shim disconnected" id=e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786 namespace=k8s.io Dec 13 13:16:12.130624 containerd[1941]: time="2024-12-13T13:16:12.130371588Z" level=warning msg="cleaning up after shim disconnected" id=e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786 namespace=k8s.io Dec 13 13:16:12.130624 containerd[1941]: time="2024-12-13T13:16:12.130392130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:12.290056 containerd[1941]: time="2024-12-13T13:16:12.289935531Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:16:12.327318 containerd[1941]: time="2024-12-13T13:16:12.326351734Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\"" Dec 13 13:16:12.327478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808506305.mount: Deactivated successfully. Dec 13 13:16:12.329103 kubelet[2400]: I1213 13:16:12.328511 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x5xnq" podStartSLOduration=4.534886146 podStartE2EDuration="16.328486293s" podCreationTimestamp="2024-12-13 13:15:56 +0000 UTC" firstStartedPulling="2024-12-13 13:15:59.626619081 +0000 UTC m=+4.801070076" lastFinishedPulling="2024-12-13 13:16:11.420219228 +0000 UTC m=+16.594670223" observedRunningTime="2024-12-13 13:16:12.299563899 +0000 UTC m=+17.474014906" watchObservedRunningTime="2024-12-13 13:16:12.328486293 +0000 UTC m=+17.502937312" Dec 13 13:16:12.330607 containerd[1941]: time="2024-12-13T13:16:12.330531275Z" level=info msg="StartContainer for \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\"" Dec 13 13:16:12.384319 systemd[1]: Started cri-containerd-40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7.scope - libcontainer container 40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7. Dec 13 13:16:12.453427 systemd[1]: cri-containerd-40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7.scope: Deactivated successfully. 
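The pod_startup_latency_tracker entry for kube-proxy-x5xnq encodes a simple relationship: podStartE2EDuration is the gap between podCreationTimestamp and the watch-observed running time, and podStartSLOduration is that same gap minus the time spent pulling images (firstStartedPulling to lastFinishedPulling). The numbers in the entry check out exactly; here is the arithmetic redone from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 13:15:56 +0000 UTC")
	firstPull := parse("2024-12-13 13:15:59.626619081 +0000 UTC")
	lastPull := parse("2024-12-13 13:16:11.420219228 +0000 UTC")
	observed := parse("2024-12-13 13:16:12.328486293 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)    // podStartE2EDuration: 16.328486293s
	pull := lastPull.Sub(firstPull) // image pull window:   11.793600147s
	slo := e2e - pull               // podStartSLOduration:  4.534886146s

	fmt.Println(e2e, pull, slo)
}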
Dec 13 13:16:12.455628 containerd[1941]: time="2024-12-13T13:16:12.455567643Z" level=info msg="StartContainer for \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\" returns successfully" Dec 13 13:16:12.502833 containerd[1941]: time="2024-12-13T13:16:12.502688876Z" level=info msg="shim disconnected" id=40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7 namespace=k8s.io Dec 13 13:16:12.502833 containerd[1941]: time="2024-12-13T13:16:12.502760576Z" level=warning msg="cleaning up after shim disconnected" id=40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7 namespace=k8s.io Dec 13 13:16:12.502833 containerd[1941]: time="2024-12-13T13:16:12.502779882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:12.858065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7-rootfs.mount: Deactivated successfully. Dec 13 13:16:13.000752 kubelet[2400]: E1213 13:16:13.000684 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:13.298756 containerd[1941]: time="2024-12-13T13:16:13.298347584Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:16:13.327165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143526990.mount: Deactivated successfully. Dec 13 13:16:13.329104 containerd[1941]: time="2024-12-13T13:16:13.329053506Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\"" Dec 13 13:16:13.330767 containerd[1941]: time="2024-12-13T13:16:13.330696155Z" level=info msg="StartContainer for \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\"" Dec 13 13:16:13.390335 systemd[1]: Started cri-containerd-4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31.scope - libcontainer container 4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31. Dec 13 13:16:13.437533 systemd[1]: cri-containerd-4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31.scope: Deactivated successfully. Dec 13 13:16:13.484199 containerd[1941]: time="2024-12-13T13:16:13.484004143Z" level=info msg="StartContainer for \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\" returns successfully" Dec 13 13:16:13.529715 containerd[1941]: time="2024-12-13T13:16:13.529578800Z" level=info msg="shim disconnected" id=4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31 namespace=k8s.io Dec 13 13:16:13.529715 containerd[1941]: time="2024-12-13T13:16:13.529656515Z" level=warning msg="cleaning up after shim disconnected" id=4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31 namespace=k8s.io Dec 13 13:16:13.529715 containerd[1941]: time="2024-12-13T13:16:13.529676324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:13.858271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31-rootfs.mount: Deactivated successfully. 
Dec 13 13:16:14.001350 kubelet[2400]: E1213 13:16:14.001291 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:14.306762 containerd[1941]: time="2024-12-13T13:16:14.305449651Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:16:14.341975 containerd[1941]: time="2024-12-13T13:16:14.341839813Z" level=info msg="CreateContainer within sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\"" Dec 13 13:16:14.343157 containerd[1941]: time="2024-12-13T13:16:14.343049093Z" level=info msg="StartContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\"" Dec 13 13:16:14.405348 systemd[1]: Started cri-containerd-1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d.scope - libcontainer container 1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d. Dec 13 13:16:14.459959 containerd[1941]: time="2024-12-13T13:16:14.459876370Z" level=info msg="StartContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" returns successfully" Dec 13 13:16:14.679891 kubelet[2400]: I1213 13:16:14.679491 2400 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:16:14.859602 systemd[1]: run-containerd-runc-k8s.io-1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d-runc.w0rCM2.mount: Deactivated successfully. Dec 13 13:16:15.002237 kubelet[2400]: E1213 13:16:15.001882 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:15.321521 kernel: Initializing XFRM netlink socket Dec 13 13:16:15.987828 kubelet[2400]: E1213 13:16:15.987763 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:16.002096 kubelet[2400]: E1213 13:16:16.002006 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:17.002994 kubelet[2400]: E1213 13:16:17.002920 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:17.124332 systemd-networkd[1850]: cilium_host: Link UP Dec 13 13:16:17.125585 (udev-worker)[3053]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:16:17.128413 systemd-networkd[1850]: cilium_net: Link UP Dec 13 13:16:17.128815 systemd-networkd[1850]: cilium_net: Gained carrier Dec 13 13:16:17.129151 systemd-networkd[1850]: cilium_host: Gained carrier Dec 13 13:16:17.131433 (udev-worker)[3055]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:16:17.291534 (udev-worker)[3100]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:16:17.302905 systemd-networkd[1850]: cilium_vxlan: Link UP Dec 13 13:16:17.302924 systemd-networkd[1850]: cilium_vxlan: Gained carrier Dec 13 13:16:17.755638 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
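The pattern above, with mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each created, started, run to completion and torn down one after another before a long-lived cilium-agent finally starts, is what a pod with sequential init containers looks like from the runtime's point of view. Schematically, with only the container names and ordering taken from this log and everything else abbreviated, the pod shape is roughly:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	image := "quay.io/cilium/cilium:v1.12.5" // pulled by digest earlier in the log
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cilium-lqfmt", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup", Image: image},
				{Name: "apply-sysctl-overwrites", Image: image},
				{Name: "mount-bpf-fs", Image: image},
				{Name: "clean-cilium-state", Image: image},
			},
			Containers: []corev1.Container{
				{Name: "cilium-agent", Image: image},
			},
		},
	}
	fmt.Printf("%d init containers run before %s\n",
		len(pod.Spec.InitContainers), pod.Spec.Containers[0].Name)
}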
Dec 13 13:16:17.842062 kernel: NET: Registered PF_ALG protocol family Dec 13 13:16:17.950338 systemd-networkd[1850]: cilium_net: Gained IPv6LL Dec 13 13:16:18.003622 kubelet[2400]: E1213 13:16:18.003539 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:18.142874 systemd-networkd[1850]: cilium_host: Gained IPv6LL Dec 13 13:16:18.974575 systemd-networkd[1850]: cilium_vxlan: Gained IPv6LL Dec 13 13:16:19.005803 kubelet[2400]: E1213 13:16:19.005738 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:19.107445 systemd-networkd[1850]: lxc_health: Link UP Dec 13 13:16:19.120889 systemd-networkd[1850]: lxc_health: Gained carrier Dec 13 13:16:19.727688 kubelet[2400]: I1213 13:16:19.726929 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lqfmt" podStartSLOduration=14.945102989 podStartE2EDuration="23.726904561s" podCreationTimestamp="2024-12-13 13:15:56 +0000 UTC" firstStartedPulling="2024-12-13 13:15:59.616309913 +0000 UTC m=+4.790760908" lastFinishedPulling="2024-12-13 13:16:08.398111497 +0000 UTC m=+13.572562480" observedRunningTime="2024-12-13 13:16:15.335411369 +0000 UTC m=+20.509862400" watchObservedRunningTime="2024-12-13 13:16:19.726904561 +0000 UTC m=+24.901355580" Dec 13 13:16:19.727688 kubelet[2400]: I1213 13:16:19.727420 2400 topology_manager.go:215] "Topology Admit Handler" podUID="2aa22f9a-f70f-4b3c-9c61-4a8de2c9a85f" podNamespace="default" podName="nginx-deployment-85f456d6dd-qb6j5" Dec 13 13:16:19.741090 systemd[1]: Created slice kubepods-besteffort-pod2aa22f9a_f70f_4b3c_9c61_4a8de2c9a85f.slice - libcontainer container kubepods-besteffort-pod2aa22f9a_f70f_4b3c_9c61_4a8de2c9a85f.slice. 
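Once the agent is up, systemd-networkd starts reporting the cilium-created links: cilium_host and cilium_net (a veth pair for host connectivity), cilium_vxlan (the overlay device) and lxc_health, each gaining carrier and then an IPv6 link-local address; the roles given here are an interpretation of the names, not something the log states. To inspect those devices directly, a small netlink query using the vishvananda/netlink package could look like this:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Interface names taken from the systemd-networkd entries above.
	for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		a := link.Attrs()
		fmt.Printf("%-12s type=%-6s state=%v mtu=%d\n", name, link.Type(), a.OperState, a.MTU)
	}
}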
Dec 13 13:16:19.862650 kubelet[2400]: I1213 13:16:19.862535 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxxn\" (UniqueName: \"kubernetes.io/projected/2aa22f9a-f70f-4b3c-9c61-4a8de2c9a85f-kube-api-access-bmxxn\") pod \"nginx-deployment-85f456d6dd-qb6j5\" (UID: \"2aa22f9a-f70f-4b3c-9c61-4a8de2c9a85f\") " pod="default/nginx-deployment-85f456d6dd-qb6j5" Dec 13 13:16:20.006479 kubelet[2400]: E1213 13:16:20.006322 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:20.052330 containerd[1941]: time="2024-12-13T13:16:20.052176170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qb6j5,Uid:2aa22f9a-f70f-4b3c-9c61-4a8de2c9a85f,Namespace:default,Attempt:0,}" Dec 13 13:16:20.141350 systemd-networkd[1850]: lxccdfe45efe849: Link UP Dec 13 13:16:20.152603 kernel: eth0: renamed from tmp601b4 Dec 13 13:16:20.164345 systemd-networkd[1850]: lxccdfe45efe849: Gained carrier Dec 13 13:16:20.382302 systemd-networkd[1850]: lxc_health: Gained IPv6LL Dec 13 13:16:21.007145 kubelet[2400]: E1213 13:16:21.007072 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:22.008327 kubelet[2400]: E1213 13:16:22.008260 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:22.110470 systemd-networkd[1850]: lxccdfe45efe849: Gained IPv6LL Dec 13 13:16:23.008812 kubelet[2400]: E1213 13:16:23.008691 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:24.009669 kubelet[2400]: E1213 13:16:24.009584 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:25.010158 kubelet[2400]: E1213 13:16:25.010048 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:25.044331 ntpd[1920]: Listen normally on 8 cilium_host 192.168.1.143:123 Dec 13 13:16:25.044464 ntpd[1920]: Listen normally on 9 cilium_net [fe80::cc8d:a1ff:fe5e:ed15%3]:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 8 cilium_host 192.168.1.143:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 9 cilium_net [fe80::cc8d:a1ff:fe5e:ed15%3]:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 10 cilium_host [fe80::385f:d7ff:fedb:c698%4]:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 11 cilium_vxlan [fe80::2479:9dff:fed5:dfb5%5]:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 12 lxc_health [fe80::fcec:d0ff:fe3c:6b2c%7]:123 Dec 13 13:16:25.044913 ntpd[1920]: 13 Dec 13:16:25 ntpd[1920]: Listen normally on 13 lxccdfe45efe849 [fe80::9470:84ff:fe1a:aaab%9]:123 Dec 13 13:16:25.044545 ntpd[1920]: Listen normally on 10 cilium_host [fe80::385f:d7ff:fedb:c698%4]:123 Dec 13 13:16:25.044615 ntpd[1920]: Listen normally on 11 cilium_vxlan [fe80::2479:9dff:fed5:dfb5%5]:123 Dec 13 13:16:25.044681 ntpd[1920]: Listen normally on 12 lxc_health [fe80::fcec:d0ff:fe3c:6b2c%7]:123 Dec 13 13:16:25.044747 ntpd[1920]: Listen normally on 13 lxccdfe45efe849 [fe80::9470:84ff:fe1a:aaab%9]:123 Dec 13 13:16:26.010430 kubelet[2400]: E1213 13:16:26.010358 
2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:27.010540 kubelet[2400]: E1213 13:16:27.010469 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:28.011173 kubelet[2400]: E1213 13:16:28.011116 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:28.166938 containerd[1941]: time="2024-12-13T13:16:28.166203104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:28.166938 containerd[1941]: time="2024-12-13T13:16:28.166301434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:28.166938 containerd[1941]: time="2024-12-13T13:16:28.166337367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:28.166938 containerd[1941]: time="2024-12-13T13:16:28.166517926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:28.203237 systemd[1]: run-containerd-runc-k8s.io-601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18-runc.zonC0K.mount: Deactivated successfully. Dec 13 13:16:28.216362 systemd[1]: Started cri-containerd-601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18.scope - libcontainer container 601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18. Dec 13 13:16:28.280343 containerd[1941]: time="2024-12-13T13:16:28.279985060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qb6j5,Uid:2aa22f9a-f70f-4b3c-9c61-4a8de2c9a85f,Namespace:default,Attempt:0,} returns sandbox id \"601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18\"" Dec 13 13:16:28.284044 containerd[1941]: time="2024-12-13T13:16:28.283716477Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:16:29.012384 kubelet[2400]: E1213 13:16:29.012305 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:30.013615 kubelet[2400]: E1213 13:16:30.013472 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:31.014442 kubelet[2400]: E1213 13:16:31.014351 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:31.287584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869517407.mount: Deactivated successfully. Dec 13 13:16:32.015137 kubelet[2400]: E1213 13:16:32.015083 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:32.652250 update_engine[1925]: I20241213 13:16:32.652110 1925 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:16:32.656074 containerd[1941]: time="2024-12-13T13:16:32.655983464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.658536 containerd[1941]: time="2024-12-13T13:16:32.658454491Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 13:16:32.661064 containerd[1941]: time="2024-12-13T13:16:32.660502750Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.680068 containerd[1941]: time="2024-12-13T13:16:32.678966574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.683169 containerd[1941]: time="2024-12-13T13:16:32.683108296Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 4.399335883s" Dec 13 13:16:32.684126 containerd[1941]: time="2024-12-13T13:16:32.684085584Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 13:16:32.688458 containerd[1941]: time="2024-12-13T13:16:32.688405031Z" level=info msg="CreateContainer within sandbox \"601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:16:32.722331 containerd[1941]: time="2024-12-13T13:16:32.722264244Z" level=info msg="CreateContainer within sandbox \"601b49175024997b96429dd459de6133e9b1c66ab0560c0d270b74de93391a18\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4c9094a1d29cada6b882d4a714f4d336057af7b280d8d019fe5dc93ac8898eef\"" Dec 13 13:16:32.723772 containerd[1941]: time="2024-12-13T13:16:32.723722589Z" level=info msg="StartContainer for \"4c9094a1d29cada6b882d4a714f4d336057af7b280d8d019fe5dc93ac8898eef\"" Dec 13 13:16:32.791200 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3550) Dec 13 13:16:32.808339 systemd[1]: Started cri-containerd-4c9094a1d29cada6b882d4a714f4d336057af7b280d8d019fe5dc93ac8898eef.scope - libcontainer container 4c9094a1d29cada6b882d4a714f4d336057af7b280d8d019fe5dc93ac8898eef. 
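The nginx pull above finished after about 4.4 s and is recorded three ways: an image id, a repo tag (ghcr.io/flatcar/nginx:latest) and a repo digest (ghcr.io/flatcar/nginx@sha256:e04e...). The digest form is the content-addressed one; the tag can move later, the digest cannot. As an aside, splitting such a reference with the distribution reference library (an implementation choice for illustration, not something this log uses) looks like:

package main

import (
	"fmt"
	"log"

	"github.com/distribution/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed(
		"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1")
	if err != nil {
		log.Fatal(err)
	}
	if canonical, ok := ref.(reference.Canonical); ok {
		fmt.Println("repository:", reference.FamiliarName(canonical))
		fmt.Println("digest:    ", canonical.Digest())
	}
}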
Dec 13 13:16:32.893681 containerd[1941]: time="2024-12-13T13:16:32.893613805Z" level=info msg="StartContainer for \"4c9094a1d29cada6b882d4a714f4d336057af7b280d8d019fe5dc93ac8898eef\" returns successfully" Dec 13 13:16:33.015708 kubelet[2400]: E1213 13:16:33.015506 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:33.118247 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3559) Dec 13 13:16:33.403682 kubelet[2400]: I1213 13:16:33.403591 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-qb6j5" podStartSLOduration=10.001084262 podStartE2EDuration="14.403571156s" podCreationTimestamp="2024-12-13 13:16:19 +0000 UTC" firstStartedPulling="2024-12-13 13:16:28.283070314 +0000 UTC m=+33.457521309" lastFinishedPulling="2024-12-13 13:16:32.685557184 +0000 UTC m=+37.860008203" observedRunningTime="2024-12-13 13:16:33.403272363 +0000 UTC m=+38.577723394" watchObservedRunningTime="2024-12-13 13:16:33.403571156 +0000 UTC m=+38.578022175" Dec 13 13:16:34.016606 kubelet[2400]: E1213 13:16:34.016541 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:35.017042 kubelet[2400]: E1213 13:16:35.016969 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:35.987038 kubelet[2400]: E1213 13:16:35.986942 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:36.017319 kubelet[2400]: E1213 13:16:36.017281 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:37.017741 kubelet[2400]: E1213 13:16:37.017668 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:38.017835 kubelet[2400]: E1213 13:16:38.017766 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:39.018231 kubelet[2400]: E1213 13:16:39.018167 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:40.019266 kubelet[2400]: E1213 13:16:40.019200 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:40.144198 kubelet[2400]: I1213 13:16:40.144090 2400 topology_manager.go:215] "Topology Admit Handler" podUID="7f438911-8c4c-4ecf-86e5-56e88edf4afe" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 13:16:40.156285 systemd[1]: Created slice kubepods-besteffort-pod7f438911_8c4c_4ecf_86e5_56e88edf4afe.slice - libcontainer container kubepods-besteffort-pod7f438911_8c4c_4ecf_86e5_56e88edf4afe.slice. 
Dec 13 13:16:40.195690 kubelet[2400]: I1213 13:16:40.195615 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgx5\" (UniqueName: \"kubernetes.io/projected/7f438911-8c4c-4ecf-86e5-56e88edf4afe-kube-api-access-ghgx5\") pod \"nfs-server-provisioner-0\" (UID: \"7f438911-8c4c-4ecf-86e5-56e88edf4afe\") " pod="default/nfs-server-provisioner-0" Dec 13 13:16:40.195690 kubelet[2400]: I1213 13:16:40.195688 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7f438911-8c4c-4ecf-86e5-56e88edf4afe-data\") pod \"nfs-server-provisioner-0\" (UID: \"7f438911-8c4c-4ecf-86e5-56e88edf4afe\") " pod="default/nfs-server-provisioner-0" Dec 13 13:16:40.461668 containerd[1941]: time="2024-12-13T13:16:40.461612766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f438911-8c4c-4ecf-86e5-56e88edf4afe,Namespace:default,Attempt:0,}" Dec 13 13:16:40.514926 systemd-networkd[1850]: lxc50c79fe4dc0f: Link UP Dec 13 13:16:40.531099 kernel: eth0: renamed from tmp8822e Dec 13 13:16:40.538530 (udev-worker)[3773]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:16:40.540303 systemd-networkd[1850]: lxc50c79fe4dc0f: Gained carrier Dec 13 13:16:40.851094 containerd[1941]: time="2024-12-13T13:16:40.850840530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:40.851684 containerd[1941]: time="2024-12-13T13:16:40.851302557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:40.851852 containerd[1941]: time="2024-12-13T13:16:40.851475503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:40.852968 containerd[1941]: time="2024-12-13T13:16:40.852784001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:40.895376 systemd[1]: Started cri-containerd-8822e2e42e66ace3a297aaa3c6ca373c7b09af444fdb7b1a2a03d2282c72bb03.scope - libcontainer container 8822e2e42e66ace3a297aaa3c6ca373c7b09af444fdb7b1a2a03d2282c72bb03. 
Dec 13 13:16:40.955093 containerd[1941]: time="2024-12-13T13:16:40.954966347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f438911-8c4c-4ecf-86e5-56e88edf4afe,Namespace:default,Attempt:0,} returns sandbox id \"8822e2e42e66ace3a297aaa3c6ca373c7b09af444fdb7b1a2a03d2282c72bb03\"" Dec 13 13:16:40.958369 containerd[1941]: time="2024-12-13T13:16:40.958322277Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:16:41.019658 kubelet[2400]: E1213 13:16:41.019597 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:42.014490 systemd-networkd[1850]: lxc50c79fe4dc0f: Gained IPv6LL Dec 13 13:16:42.020601 kubelet[2400]: E1213 13:16:42.020539 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:43.024210 kubelet[2400]: E1213 13:16:43.024108 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:43.663047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538022019.mount: Deactivated successfully. Dec 13 13:16:44.026517 kubelet[2400]: E1213 13:16:44.025095 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:44.044638 ntpd[1920]: Listen normally on 14 lxc50c79fe4dc0f [fe80::c050:2eff:fe68:32e%11]:123 Dec 13 13:16:44.046170 ntpd[1920]: 13 Dec 13:16:44 ntpd[1920]: Listen normally on 14 lxc50c79fe4dc0f [fe80::c050:2eff:fe68:32e%11]:123 Dec 13 13:16:45.026634 kubelet[2400]: E1213 13:16:45.026590 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:46.028127 kubelet[2400]: E1213 13:16:46.027994 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:46.660049 containerd[1941]: time="2024-12-13T13:16:46.659747384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:46.661737 containerd[1941]: time="2024-12-13T13:16:46.661664406Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Dec 13 13:16:46.662970 containerd[1941]: time="2024-12-13T13:16:46.662867996Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:46.668174 containerd[1941]: time="2024-12-13T13:16:46.668071372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:46.670364 containerd[1941]: time="2024-12-13T13:16:46.670192508Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.711557548s" Dec 13 13:16:46.670364 containerd[1941]: 
time="2024-12-13T13:16:46.670246955Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 13:16:46.675686 containerd[1941]: time="2024-12-13T13:16:46.675418576Z" level=info msg="CreateContainer within sandbox \"8822e2e42e66ace3a297aaa3c6ca373c7b09af444fdb7b1a2a03d2282c72bb03\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:16:46.700052 containerd[1941]: time="2024-12-13T13:16:46.699967833Z" level=info msg="CreateContainer within sandbox \"8822e2e42e66ace3a297aaa3c6ca373c7b09af444fdb7b1a2a03d2282c72bb03\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a7af42bf6728b3092d3e5a09643762e5d55c7e3528ab8fe40d0d9b4681df22c2\"" Dec 13 13:16:46.701400 containerd[1941]: time="2024-12-13T13:16:46.701248465Z" level=info msg="StartContainer for \"a7af42bf6728b3092d3e5a09643762e5d55c7e3528ab8fe40d0d9b4681df22c2\"" Dec 13 13:16:46.751355 systemd[1]: Started cri-containerd-a7af42bf6728b3092d3e5a09643762e5d55c7e3528ab8fe40d0d9b4681df22c2.scope - libcontainer container a7af42bf6728b3092d3e5a09643762e5d55c7e3528ab8fe40d0d9b4681df22c2. Dec 13 13:16:46.796925 containerd[1941]: time="2024-12-13T13:16:46.796843011Z" level=info msg="StartContainer for \"a7af42bf6728b3092d3e5a09643762e5d55c7e3528ab8fe40d0d9b4681df22c2\" returns successfully" Dec 13 13:16:47.029331 kubelet[2400]: E1213 13:16:47.029188 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:48.030048 kubelet[2400]: E1213 13:16:48.029944 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:49.031179 kubelet[2400]: E1213 13:16:49.031116 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:50.031628 kubelet[2400]: E1213 13:16:50.031568 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:51.032336 kubelet[2400]: E1213 13:16:51.032271 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:52.033190 kubelet[2400]: E1213 13:16:52.033117 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:53.033731 kubelet[2400]: E1213 13:16:53.033667 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:54.034064 kubelet[2400]: E1213 13:16:54.033971 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:55.035062 kubelet[2400]: E1213 13:16:55.034971 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:55.987051 kubelet[2400]: E1213 13:16:55.986983 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:56.035156 kubelet[2400]: E1213 13:16:56.035093 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:56.717172 kubelet[2400]: I1213 13:16:56.717087 2400 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.002321541 podStartE2EDuration="16.71706428s" podCreationTimestamp="2024-12-13 13:16:40 +0000 UTC" firstStartedPulling="2024-12-13 13:16:40.957606863 +0000 UTC m=+46.132057858" lastFinishedPulling="2024-12-13 13:16:46.672349602 +0000 UTC m=+51.846800597" observedRunningTime="2024-12-13 13:16:47.441313149 +0000 UTC m=+52.615764192" watchObservedRunningTime="2024-12-13 13:16:56.71706428 +0000 UTC m=+61.891515311" Dec 13 13:16:56.717492 kubelet[2400]: I1213 13:16:56.717293 2400 topology_manager.go:215] "Topology Admit Handler" podUID="f8785bc8-06df-4773-b726-0bce94e82fb3" podNamespace="default" podName="test-pod-1" Dec 13 13:16:56.728126 systemd[1]: Created slice kubepods-besteffort-podf8785bc8_06df_4773_b726_0bce94e82fb3.slice - libcontainer container kubepods-besteffort-podf8785bc8_06df_4773_b726_0bce94e82fb3.slice. Dec 13 13:16:56.885675 kubelet[2400]: I1213 13:16:56.885597 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d496a93b-f141-4dec-b35e-d67f0d4e6bfd\" (UniqueName: \"kubernetes.io/nfs/f8785bc8-06df-4773-b726-0bce94e82fb3-pvc-d496a93b-f141-4dec-b35e-d67f0d4e6bfd\") pod \"test-pod-1\" (UID: \"f8785bc8-06df-4773-b726-0bce94e82fb3\") " pod="default/test-pod-1" Dec 13 13:16:56.885675 kubelet[2400]: I1213 13:16:56.885672 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sd46\" (UniqueName: \"kubernetes.io/projected/f8785bc8-06df-4773-b726-0bce94e82fb3-kube-api-access-2sd46\") pod \"test-pod-1\" (UID: \"f8785bc8-06df-4773-b726-0bce94e82fb3\") " pod="default/test-pod-1" Dec 13 13:16:57.028070 kernel: FS-Cache: Loaded Dec 13 13:16:57.035703 kubelet[2400]: E1213 13:16:57.035630 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:57.068651 kernel: RPC: Registered named UNIX socket transport module. Dec 13 13:16:57.068801 kernel: RPC: Registered udp transport module. Dec 13 13:16:57.068860 kernel: RPC: Registered tcp transport module. Dec 13 13:16:57.069363 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:16:57.070389 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 13:16:57.370481 kernel: NFS: Registering the id_resolver key type Dec 13 13:16:57.370604 kernel: Key type id_resolver registered Dec 13 13:16:57.370643 kernel: Key type id_legacy registered Dec 13 13:16:57.411643 nfsidmap[3962]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 13:16:57.419192 nfsidmap[3963]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 13:16:57.634783 containerd[1941]: time="2024-12-13T13:16:57.634631312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f8785bc8-06df-4773-b726-0bce94e82fb3,Namespace:default,Attempt:0,}" Dec 13 13:16:57.683787 (udev-worker)[3948]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:16:57.683788 (udev-worker)[3959]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 13:16:57.688692 systemd-networkd[1850]: lxc076e7e41df8c: Link UP Dec 13 13:16:57.698058 kernel: eth0: renamed from tmpc9771 Dec 13 13:16:57.711251 systemd-networkd[1850]: lxc076e7e41df8c: Gained carrier Dec 13 13:16:57.986358 containerd[1941]: time="2024-12-13T13:16:57.986073612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:57.986964 containerd[1941]: time="2024-12-13T13:16:57.986228261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:57.987405 containerd[1941]: time="2024-12-13T13:16:57.987294538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:57.987849 containerd[1941]: time="2024-12-13T13:16:57.987778464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:58.028349 systemd[1]: Started cri-containerd-c9771759b0b39b23ede94bcdb367e36a5d4c083a4e34eabcf2ecab5dd2a26dbc.scope - libcontainer container c9771759b0b39b23ede94bcdb367e36a5d4c083a4e34eabcf2ecab5dd2a26dbc. Dec 13 13:16:58.035907 kubelet[2400]: E1213 13:16:58.035810 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:58.091782 containerd[1941]: time="2024-12-13T13:16:58.091630905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f8785bc8-06df-4773-b726-0bce94e82fb3,Namespace:default,Attempt:0,} returns sandbox id \"c9771759b0b39b23ede94bcdb367e36a5d4c083a4e34eabcf2ecab5dd2a26dbc\"" Dec 13 13:16:58.095807 containerd[1941]: time="2024-12-13T13:16:58.095746418Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:16:58.455329 containerd[1941]: time="2024-12-13T13:16:58.455267663Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:58.457482 containerd[1941]: time="2024-12-13T13:16:58.457391692Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:16:58.463202 containerd[1941]: time="2024-12-13T13:16:58.462934214Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 367.129207ms" Dec 13 13:16:58.463202 containerd[1941]: time="2024-12-13T13:16:58.462992263Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 13:16:58.466925 containerd[1941]: time="2024-12-13T13:16:58.466725829Z" level=info msg="CreateContainer within sandbox \"c9771759b0b39b23ede94bcdb367e36a5d4c083a4e34eabcf2ecab5dd2a26dbc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:16:58.496735 containerd[1941]: time="2024-12-13T13:16:58.496660425Z" level=info msg="CreateContainer within sandbox \"c9771759b0b39b23ede94bcdb367e36a5d4c083a4e34eabcf2ecab5dd2a26dbc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b03caa6678eefcd3b66d8e80212a6bf63717e226d924da73a242a3c183362509\"" Dec 13 13:16:58.497623 
containerd[1941]: time="2024-12-13T13:16:58.497409804Z" level=info msg="StartContainer for \"b03caa6678eefcd3b66d8e80212a6bf63717e226d924da73a242a3c183362509\"" Dec 13 13:16:58.551353 systemd[1]: Started cri-containerd-b03caa6678eefcd3b66d8e80212a6bf63717e226d924da73a242a3c183362509.scope - libcontainer container b03caa6678eefcd3b66d8e80212a6bf63717e226d924da73a242a3c183362509. Dec 13 13:16:58.598400 containerd[1941]: time="2024-12-13T13:16:58.598189033Z" level=info msg="StartContainer for \"b03caa6678eefcd3b66d8e80212a6bf63717e226d924da73a242a3c183362509\" returns successfully" Dec 13 13:16:59.036131 kubelet[2400]: E1213 13:16:59.036062 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:16:59.614652 systemd-networkd[1850]: lxc076e7e41df8c: Gained IPv6LL Dec 13 13:17:00.036593 kubelet[2400]: E1213 13:17:00.036429 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:01.036977 kubelet[2400]: E1213 13:17:01.036905 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:02.037780 kubelet[2400]: E1213 13:17:02.037734 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:02.044300 ntpd[1920]: Listen normally on 15 lxc076e7e41df8c [fe80::805c:f9ff:fe47:7c1d%13]:123 Dec 13 13:17:02.045122 ntpd[1920]: 13 Dec 13:17:02 ntpd[1920]: Listen normally on 15 lxc076e7e41df8c [fe80::805c:f9ff:fe47:7c1d%13]:123 Dec 13 13:17:03.038865 kubelet[2400]: E1213 13:17:03.038809 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:04.039572 kubelet[2400]: E1213 13:17:04.039512 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:05.039670 kubelet[2400]: E1213 13:17:05.039609 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:05.778428 kubelet[2400]: I1213 13:17:05.778284 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.408893198 podStartE2EDuration="25.778260264s" podCreationTimestamp="2024-12-13 13:16:40 +0000 UTC" firstStartedPulling="2024-12-13 13:16:58.09477027 +0000 UTC m=+63.269221265" lastFinishedPulling="2024-12-13 13:16:58.464137348 +0000 UTC m=+63.638588331" observedRunningTime="2024-12-13 13:16:59.476550556 +0000 UTC m=+64.651001551" watchObservedRunningTime="2024-12-13 13:17:05.778260264 +0000 UTC m=+70.952711259" Dec 13 13:17:05.819708 containerd[1941]: time="2024-12-13T13:17:05.819629063Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:17:05.830948 containerd[1941]: time="2024-12-13T13:17:05.830893344Z" level=info msg="StopContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" with timeout 2 (s)" Dec 13 13:17:05.831670 containerd[1941]: time="2024-12-13T13:17:05.831622901Z" level=info msg="Stop container \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" with signal terminated" Dec 13 13:17:05.844411 
systemd-networkd[1850]: lxc_health: Link DOWN Dec 13 13:17:05.844427 systemd-networkd[1850]: lxc_health: Lost carrier Dec 13 13:17:05.866925 systemd[1]: cri-containerd-1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d.scope: Deactivated successfully. Dec 13 13:17:05.867975 systemd[1]: cri-containerd-1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d.scope: Consumed 14.124s CPU time. Dec 13 13:17:05.904098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d-rootfs.mount: Deactivated successfully. Dec 13 13:17:06.040719 kubelet[2400]: E1213 13:17:06.040581 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:06.162290 kubelet[2400]: E1213 13:17:06.162189 2400 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:17:06.622660 containerd[1941]: time="2024-12-13T13:17:06.622522142Z" level=info msg="shim disconnected" id=1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d namespace=k8s.io Dec 13 13:17:06.622660 containerd[1941]: time="2024-12-13T13:17:06.622595835Z" level=warning msg="cleaning up after shim disconnected" id=1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d namespace=k8s.io Dec 13 13:17:06.622660 containerd[1941]: time="2024-12-13T13:17:06.622614781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:06.648064 containerd[1941]: time="2024-12-13T13:17:06.647982474Z" level=info msg="StopContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" returns successfully" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.648991218Z" level=info msg="StopPodSandbox for \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\"" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.649077109Z" level=info msg="Container to stop \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.649101722Z" level=info msg="Container to stop \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.649122852Z" level=info msg="Container to stop \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.649143034Z" level=info msg="Container to stop \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:17:06.649419 containerd[1941]: time="2024-12-13T13:17:06.649168163Z" level=info msg="Container to stop \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:17:06.652996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c-shm.mount: Deactivated successfully. 
Dec 13 13:17:06.663112 systemd[1]: cri-containerd-2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c.scope: Deactivated successfully. Dec 13 13:17:06.694768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c-rootfs.mount: Deactivated successfully. Dec 13 13:17:06.705464 containerd[1941]: time="2024-12-13T13:17:06.705268984Z" level=info msg="shim disconnected" id=2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c namespace=k8s.io Dec 13 13:17:06.705464 containerd[1941]: time="2024-12-13T13:17:06.705348739Z" level=warning msg="cleaning up after shim disconnected" id=2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c namespace=k8s.io Dec 13 13:17:06.705464 containerd[1941]: time="2024-12-13T13:17:06.705371875Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:06.727597 containerd[1941]: time="2024-12-13T13:17:06.727536836Z" level=info msg="TearDown network for sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" successfully" Dec 13 13:17:06.727597 containerd[1941]: time="2024-12-13T13:17:06.727588690Z" level=info msg="StopPodSandbox for \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" returns successfully" Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846000 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-xtables-lock\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846411 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-bpf-maps\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846461 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f8e7977-105f-4870-a2e8-b8e9037bfc78-clustermesh-secrets\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846499 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-run\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846532 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-cgroup\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852144 kubelet[2400]: I1213 13:17:06.846564 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-lib-modules\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846604 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hubble-tls\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846636 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hostproc\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846667 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-etc-cni-netd\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846698 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-net\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846732 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cni-path\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852586 kubelet[2400]: I1213 13:17:06.846794 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-kernel\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852891 kubelet[2400]: I1213 13:17:06.846895 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmgh\" (UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-kube-api-access-2tmgh\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.852891 kubelet[2400]: I1213 13:17:06.847087 2400 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-config-path\") pod \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\" (UID: \"6f8e7977-105f-4870-a2e8-b8e9037bfc78\") " Dec 13 13:17:06.855066 kubelet[2400]: I1213 13:17:06.853486 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:17:06.855066 kubelet[2400]: I1213 13:17:06.853586 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.855066 kubelet[2400]: I1213 13:17:06.853647 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.855066 kubelet[2400]: I1213 13:17:06.853692 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.855066 kubelet[2400]: I1213 13:17:06.853733 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.855427 kubelet[2400]: I1213 13:17:06.853774 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.855840 kubelet[2400]: I1213 13:17:06.855770 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.856116 kubelet[2400]: I1213 13:17:06.856082 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.856895 kubelet[2400]: I1213 13:17:06.856836 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.857005 kubelet[2400]: I1213 13:17:06.856912 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.857005 kubelet[2400]: I1213 13:17:06.856952 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:17:06.862464 kubelet[2400]: I1213 13:17:06.862393 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-kube-api-access-2tmgh" (OuterVolumeSpecName: "kube-api-access-2tmgh") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "kube-api-access-2tmgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:17:06.862603 kubelet[2400]: I1213 13:17:06.862546 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:17:06.863516 systemd[1]: var-lib-kubelet-pods-6f8e7977\x2d105f\x2d4870\x2da2e8\x2db8e9037bfc78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2tmgh.mount: Deactivated successfully. Dec 13 13:17:06.863910 systemd[1]: var-lib-kubelet-pods-6f8e7977\x2d105f\x2d4870\x2da2e8\x2db8e9037bfc78-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:17:06.869127 systemd[1]: var-lib-kubelet-pods-6f8e7977\x2d105f\x2d4870\x2da2e8\x2db8e9037bfc78-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:17:06.869583 kubelet[2400]: I1213 13:17:06.869312 2400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8e7977-105f-4870-a2e8-b8e9037bfc78-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f8e7977-105f-4870-a2e8-b8e9037bfc78" (UID: "6f8e7977-105f-4870-a2e8-b8e9037bfc78"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:17:06.947722 kubelet[2400]: I1213 13:17:06.947573 2400 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-bpf-maps\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.947722 kubelet[2400]: I1213 13:17:06.947626 2400 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f8e7977-105f-4870-a2e8-b8e9037bfc78-clustermesh-secrets\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.947722 kubelet[2400]: I1213 13:17:06.947655 2400 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-xtables-lock\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.947722 kubelet[2400]: I1213 13:17:06.947675 2400 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-cgroup\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.947722 kubelet[2400]: I1213 13:17:06.947695 2400 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-lib-modules\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948640 2400 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-run\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948690 2400 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-etc-cni-netd\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948713 2400 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-net\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948733 2400 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hubble-tls\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948753 2400 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-hostproc\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948772 2400 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-host-proc-sys-kernel\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948792 2400 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2tmgh\" (UniqueName: \"kubernetes.io/projected/6f8e7977-105f-4870-a2e8-b8e9037bfc78-kube-api-access-2tmgh\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.948923 kubelet[2400]: I1213 13:17:06.948811 2400 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cilium-config-path\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:06.949627 kubelet[2400]: I1213 13:17:06.948830 2400 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f8e7977-105f-4870-a2e8-b8e9037bfc78-cni-path\") on node \"172.31.17.245\" DevicePath \"\"" Dec 13 13:17:07.041309 kubelet[2400]: E1213 13:17:07.041250 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:07.479069 kubelet[2400]: I1213 13:17:07.478098 2400 scope.go:117] "RemoveContainer" containerID="1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d" Dec 13 13:17:07.484652 containerd[1941]: time="2024-12-13T13:17:07.483874693Z" level=info msg="RemoveContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\"" Dec 13 13:17:07.496886 systemd[1]: Removed slice kubepods-burstable-pod6f8e7977_105f_4870_a2e8_b8e9037bfc78.slice - libcontainer container kubepods-burstable-pod6f8e7977_105f_4870_a2e8_b8e9037bfc78.slice. Dec 13 13:17:07.497104 systemd[1]: kubepods-burstable-pod6f8e7977_105f_4870_a2e8_b8e9037bfc78.slice: Consumed 14.274s CPU time. Dec 13 13:17:07.501601 containerd[1941]: time="2024-12-13T13:17:07.501540886Z" level=info msg="RemoveContainer for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" returns successfully" Dec 13 13:17:07.502180 kubelet[2400]: I1213 13:17:07.501983 2400 scope.go:117] "RemoveContainer" containerID="4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31" Dec 13 13:17:07.505456 containerd[1941]: time="2024-12-13T13:17:07.505378400Z" level=info msg="RemoveContainer for \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\"" Dec 13 13:17:07.510405 containerd[1941]: time="2024-12-13T13:17:07.510333961Z" level=info msg="RemoveContainer for \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\" returns successfully" Dec 13 13:17:07.510872 kubelet[2400]: I1213 13:17:07.510787 2400 scope.go:117] "RemoveContainer" containerID="40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7" Dec 13 13:17:07.512873 containerd[1941]: time="2024-12-13T13:17:07.512823297Z" level=info msg="RemoveContainer for \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\"" Dec 13 13:17:07.519447 containerd[1941]: time="2024-12-13T13:17:07.519385056Z" level=info msg="RemoveContainer for \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\" returns successfully" Dec 13 13:17:07.519785 kubelet[2400]: I1213 13:17:07.519751 2400 scope.go:117] "RemoveContainer" containerID="e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786" Dec 13 13:17:07.521605 containerd[1941]: time="2024-12-13T13:17:07.521556989Z" level=info msg="RemoveContainer for \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\"" Dec 13 13:17:07.526171 containerd[1941]: time="2024-12-13T13:17:07.526102557Z" level=info msg="RemoveContainer for \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\" returns successfully" Dec 13 13:17:07.526507 kubelet[2400]: I1213 13:17:07.526420 2400 scope.go:117] "RemoveContainer" containerID="f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572" Dec 13 13:17:07.528590 containerd[1941]: time="2024-12-13T13:17:07.528460499Z" level=info msg="RemoveContainer for \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\"" Dec 13 
13:17:07.533087 containerd[1941]: time="2024-12-13T13:17:07.532991012Z" level=info msg="RemoveContainer for \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\" returns successfully" Dec 13 13:17:07.533580 kubelet[2400]: I1213 13:17:07.533475 2400 scope.go:117] "RemoveContainer" containerID="1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d" Dec 13 13:17:07.534388 containerd[1941]: time="2024-12-13T13:17:07.533914897Z" level=error msg="ContainerStatus for \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\": not found" Dec 13 13:17:07.534515 kubelet[2400]: E1213 13:17:07.534320 2400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\": not found" containerID="1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d" Dec 13 13:17:07.534769 kubelet[2400]: I1213 13:17:07.534363 2400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d"} err="failed to get container status \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1440ce3dd1a8dbd97b6e546b12121f073c37435ebad684425ea0d26cd3383a0d\": not found" Dec 13 13:17:07.534949 kubelet[2400]: I1213 13:17:07.534742 2400 scope.go:117] "RemoveContainer" containerID="4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31" Dec 13 13:17:07.535409 containerd[1941]: time="2024-12-13T13:17:07.535337020Z" level=error msg="ContainerStatus for \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\": not found" Dec 13 13:17:07.535875 kubelet[2400]: E1213 13:17:07.535656 2400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\": not found" containerID="4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31" Dec 13 13:17:07.535875 kubelet[2400]: I1213 13:17:07.535705 2400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31"} err="failed to get container status \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f5552af206cd1b3d38942be0c709303b2356cf804e8b188e22a7bea2b283b31\": not found" Dec 13 13:17:07.535875 kubelet[2400]: I1213 13:17:07.535742 2400 scope.go:117] "RemoveContainer" containerID="40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7" Dec 13 13:17:07.536550 containerd[1941]: time="2024-12-13T13:17:07.536473484Z" level=error msg="ContainerStatus for \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\": not 
found" Dec 13 13:17:07.536788 kubelet[2400]: E1213 13:17:07.536710 2400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\": not found" containerID="40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7" Dec 13 13:17:07.536871 kubelet[2400]: I1213 13:17:07.536794 2400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7"} err="failed to get container status \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\": rpc error: code = NotFound desc = an error occurred when try to find container \"40aca46ccd6d954ed6ffa019584ab1ab66c10819c5b4f41760d34acd14d51be7\": not found" Dec 13 13:17:07.536871 kubelet[2400]: I1213 13:17:07.536830 2400 scope.go:117] "RemoveContainer" containerID="e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786" Dec 13 13:17:07.537258 containerd[1941]: time="2024-12-13T13:17:07.537203737Z" level=error msg="ContainerStatus for \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\": not found" Dec 13 13:17:07.537660 kubelet[2400]: E1213 13:17:07.537589 2400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\": not found" containerID="e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786" Dec 13 13:17:07.538130 kubelet[2400]: I1213 13:17:07.537952 2400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786"} err="failed to get container status \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2e3c40fe6a732e541ba79332288ed42f58b4f906fff9aa6668f99f66c047786\": not found" Dec 13 13:17:07.538130 kubelet[2400]: I1213 13:17:07.538045 2400 scope.go:117] "RemoveContainer" containerID="f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572" Dec 13 13:17:07.538885 containerd[1941]: time="2024-12-13T13:17:07.538527795Z" level=error msg="ContainerStatus for \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\": not found" Dec 13 13:17:07.538991 kubelet[2400]: E1213 13:17:07.538802 2400 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\": not found" containerID="f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572" Dec 13 13:17:07.538991 kubelet[2400]: I1213 13:17:07.538847 2400 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572"} err="failed to get container status \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"f66cb0b968a768f2e82c4423903da4bfb27f81b3ef8e5b965a4cbb26b502e572\": not found" Dec 13 13:17:07.677544 kubelet[2400]: I1213 13:17:07.676422 2400 setters.go:580] "Node became not ready" node="172.31.17.245" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:17:07Z","lastTransitionTime":"2024-12-13T13:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 13:17:08.041909 kubelet[2400]: E1213 13:17:08.041845 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:08.044256 ntpd[1920]: Deleting interface #12 lxc_health, fe80::fcec:d0ff:fe3c:6b2c%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Dec 13 13:17:08.044767 ntpd[1920]: 13 Dec 13:17:08 ntpd[1920]: Deleting interface #12 lxc_health, fe80::fcec:d0ff:fe3c:6b2c%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Dec 13 13:17:08.150992 kubelet[2400]: I1213 13:17:08.150935 2400 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" path="/var/lib/kubelet/pods/6f8e7977-105f-4870-a2e8-b8e9037bfc78/volumes" Dec 13 13:17:09.042999 kubelet[2400]: E1213 13:17:09.042918 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:09.739513 kubelet[2400]: I1213 13:17:09.739463 2400 topology_manager.go:215] "Topology Admit Handler" podUID="1c1044d5-fd58-44c5-b1cf-b12699c15a93" podNamespace="kube-system" podName="cilium-kv6zk" Dec 13 13:17:09.739797 kubelet[2400]: E1213 13:17:09.739754 2400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="clean-cilium-state" Dec 13 13:17:09.739966 kubelet[2400]: E1213 13:17:09.739888 2400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="mount-bpf-fs" Dec 13 13:17:09.739966 kubelet[2400]: E1213 13:17:09.739907 2400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="cilium-agent" Dec 13 13:17:09.739966 kubelet[2400]: E1213 13:17:09.739924 2400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="mount-cgroup" Dec 13 13:17:09.740348 kubelet[2400]: E1213 13:17:09.739939 2400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="apply-sysctl-overwrites" Dec 13 13:17:09.740348 kubelet[2400]: I1213 13:17:09.740213 2400 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8e7977-105f-4870-a2e8-b8e9037bfc78" containerName="cilium-agent" Dec 13 13:17:09.752393 systemd[1]: Created slice kubepods-burstable-pod1c1044d5_fd58_44c5_b1cf_b12699c15a93.slice - libcontainer container kubepods-burstable-pod1c1044d5_fd58_44c5_b1cf_b12699c15a93.slice. 
Dec 13 13:17:09.765646 kubelet[2400]: W1213 13:17:09.765249 2400 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.245" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765646 kubelet[2400]: E1213 13:17:09.765305 2400 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.245" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765646 kubelet[2400]: W1213 13:17:09.765389 2400 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765646 kubelet[2400]: E1213 13:17:09.765414 2400 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765646 kubelet[2400]: W1213 13:17:09.765485 2400 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765998 kubelet[2400]: E1213 13:17:09.765508 2400 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765998 kubelet[2400]: W1213 13:17:09.765575 2400 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.765998 kubelet[2400]: E1213 13:17:09.765597 2400 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.17.245" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.245' and this object Dec 13 13:17:09.803330 kubelet[2400]: I1213 13:17:09.803278 2400 topology_manager.go:215] "Topology Admit Handler" podUID="fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52" podNamespace="kube-system" podName="cilium-operator-599987898-kkvph" Dec 13 13:17:09.813253 systemd[1]: Created slice kubepods-besteffort-podfb99e31c_8ec6_4f00_8ecf_2d4ed385dd52.slice - libcontainer container kubepods-besteffort-podfb99e31c_8ec6_4f00_8ecf_2d4ed385dd52.slice. 
Dec 13 13:17:09.864651 kubelet[2400]: I1213 13:17:09.864607 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-bpf-maps\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.864944 kubelet[2400]: I1213 13:17:09.864918 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-clustermesh-secrets\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865200 kubelet[2400]: I1213 13:17:09.865161 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-config-path\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865341 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cni-path\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865398 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-etc-cni-netd\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865433 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-lib-modules\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865469 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-ipsec-secrets\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865507 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-run\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.865947 kubelet[2400]: I1213 13:17:09.865543 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-host-proc-sys-kernel\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865577 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvmm\" 
(UniqueName: \"kubernetes.io/projected/1c1044d5-fd58-44c5-b1cf-b12699c15a93-kube-api-access-5rvmm\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865612 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-host-proc-sys-net\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865645 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c1044d5-fd58-44c5-b1cf-b12699c15a93-hubble-tls\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865685 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-hostproc\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865724 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-cgroup\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.866301 kubelet[2400]: I1213 13:17:09.865758 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c1044d5-fd58-44c5-b1cf-b12699c15a93-xtables-lock\") pod \"cilium-kv6zk\" (UID: \"1c1044d5-fd58-44c5-b1cf-b12699c15a93\") " pod="kube-system/cilium-kv6zk" Dec 13 13:17:09.967236 kubelet[2400]: I1213 13:17:09.966214 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52-cilium-config-path\") pod \"cilium-operator-599987898-kkvph\" (UID: \"fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52\") " pod="kube-system/cilium-operator-599987898-kkvph" Dec 13 13:17:09.967236 kubelet[2400]: I1213 13:17:09.966814 2400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfblm\" (UniqueName: \"kubernetes.io/projected/fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52-kube-api-access-kfblm\") pod \"cilium-operator-599987898-kkvph\" (UID: \"fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52\") " pod="kube-system/cilium-operator-599987898-kkvph" Dec 13 13:17:10.043559 kubelet[2400]: E1213 13:17:10.043410 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:10.966974 kubelet[2400]: E1213 13:17:10.966896 2400 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 13:17:10.967211 kubelet[2400]: E1213 13:17:10.967067 2400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-clustermesh-secrets podName:1c1044d5-fd58-44c5-b1cf-b12699c15a93 nodeName:}" failed. 
No retries permitted until 2024-12-13 13:17:11.467031318 +0000 UTC m=+76.641482337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-clustermesh-secrets") pod "cilium-kv6zk" (UID: "1c1044d5-fd58-44c5-b1cf-b12699c15a93") : failed to sync secret cache: timed out waiting for the condition Dec 13 13:17:10.967516 kubelet[2400]: E1213 13:17:10.967396 2400 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 13 13:17:10.967516 kubelet[2400]: E1213 13:17:10.967484 2400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-ipsec-secrets podName:1c1044d5-fd58-44c5-b1cf-b12699c15a93 nodeName:}" failed. No retries permitted until 2024-12-13 13:17:11.467459872 +0000 UTC m=+76.641910879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1c1044d5-fd58-44c5-b1cf-b12699c15a93-cilium-ipsec-secrets") pod "cilium-kv6zk" (UID: "1c1044d5-fd58-44c5-b1cf-b12699c15a93") : failed to sync secret cache: timed out waiting for the condition Dec 13 13:17:11.018550 containerd[1941]: time="2024-12-13T13:17:11.018461192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kkvph,Uid:fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:11.044380 kubelet[2400]: E1213 13:17:11.044303 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:11.057471 containerd[1941]: time="2024-12-13T13:17:11.057192944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:11.058188 containerd[1941]: time="2024-12-13T13:17:11.057470823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:11.059196 containerd[1941]: time="2024-12-13T13:17:11.058195614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:11.059196 containerd[1941]: time="2024-12-13T13:17:11.058358643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:11.102347 systemd[1]: Started cri-containerd-c0d37261e7a62e3c4c97a2492b1ae2da31c6a9c924aaeeb13a1c0db4fecd89e9.scope - libcontainer container c0d37261e7a62e3c4c97a2492b1ae2da31c6a9c924aaeeb13a1c0db4fecd89e9. 
Dec 13 13:17:11.163871 kubelet[2400]: E1213 13:17:11.163601 2400 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:17:11.165831 containerd[1941]: time="2024-12-13T13:17:11.165766193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-kkvph,Uid:fb99e31c-8ec6-4f00-8ecf-2d4ed385dd52,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0d37261e7a62e3c4c97a2492b1ae2da31c6a9c924aaeeb13a1c0db4fecd89e9\"" Dec 13 13:17:11.169314 containerd[1941]: time="2024-12-13T13:17:11.169251150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:17:11.567290 containerd[1941]: time="2024-12-13T13:17:11.567234817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6zk,Uid:1c1044d5-fd58-44c5-b1cf-b12699c15a93,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:11.602083 containerd[1941]: time="2024-12-13T13:17:11.601205206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:11.602083 containerd[1941]: time="2024-12-13T13:17:11.601335640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:11.602083 containerd[1941]: time="2024-12-13T13:17:11.601368296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:11.602621 containerd[1941]: time="2024-12-13T13:17:11.602398099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:11.637365 systemd[1]: Started cri-containerd-153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5.scope - libcontainer container 153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5. Dec 13 13:17:11.678414 containerd[1941]: time="2024-12-13T13:17:11.678347539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kv6zk,Uid:1c1044d5-fd58-44c5-b1cf-b12699c15a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\"" Dec 13 13:17:11.684933 containerd[1941]: time="2024-12-13T13:17:11.684607984Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:17:11.706402 containerd[1941]: time="2024-12-13T13:17:11.706311578Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19\"" Dec 13 13:17:11.707397 containerd[1941]: time="2024-12-13T13:17:11.707338727Z" level=info msg="StartContainer for \"6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19\"" Dec 13 13:17:11.747329 systemd[1]: Started cri-containerd-6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19.scope - libcontainer container 6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19. 
Dec 13 13:17:11.796245 containerd[1941]: time="2024-12-13T13:17:11.796057077Z" level=info msg="StartContainer for \"6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19\" returns successfully" Dec 13 13:17:11.810199 systemd[1]: cri-containerd-6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19.scope: Deactivated successfully. Dec 13 13:17:11.868237 containerd[1941]: time="2024-12-13T13:17:11.868119359Z" level=info msg="shim disconnected" id=6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19 namespace=k8s.io Dec 13 13:17:11.868237 containerd[1941]: time="2024-12-13T13:17:11.868195381Z" level=warning msg="cleaning up after shim disconnected" id=6ed153d8b7e20e0c1259549bb7b2908b4bc101248892b3211b64d3e69dbd3f19 namespace=k8s.io Dec 13 13:17:11.868237 containerd[1941]: time="2024-12-13T13:17:11.868215059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:12.045169 kubelet[2400]: E1213 13:17:12.045067 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:12.503184 containerd[1941]: time="2024-12-13T13:17:12.503126848Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:17:12.535854 containerd[1941]: time="2024-12-13T13:17:12.535761125Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167\"" Dec 13 13:17:12.536786 containerd[1941]: time="2024-12-13T13:17:12.536706934Z" level=info msg="StartContainer for \"e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167\"" Dec 13 13:17:12.589388 systemd[1]: Started cri-containerd-e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167.scope - libcontainer container e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167. Dec 13 13:17:12.640043 containerd[1941]: time="2024-12-13T13:17:12.639953564Z" level=info msg="StartContainer for \"e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167\" returns successfully" Dec 13 13:17:12.649259 systemd[1]: cri-containerd-e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167.scope: Deactivated successfully. Dec 13 13:17:12.692972 containerd[1941]: time="2024-12-13T13:17:12.692819032Z" level=info msg="shim disconnected" id=e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167 namespace=k8s.io Dec 13 13:17:12.692972 containerd[1941]: time="2024-12-13T13:17:12.692892089Z" level=warning msg="cleaning up after shim disconnected" id=e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167 namespace=k8s.io Dec 13 13:17:12.692972 containerd[1941]: time="2024-12-13T13:17:12.692913543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:12.714825 containerd[1941]: time="2024-12-13T13:17:12.714765988Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:17:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:17:13.032796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c2caf953ab504e68630e4fb0281a62139dc16cd3d7b957b5f4e0711ee3c167-rootfs.mount: Deactivated successfully. 
Dec 13 13:17:13.046059 kubelet[2400]: E1213 13:17:13.045988 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:13.508761 containerd[1941]: time="2024-12-13T13:17:13.508695580Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:17:13.543845 containerd[1941]: time="2024-12-13T13:17:13.543746473Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af\"" Dec 13 13:17:13.545469 containerd[1941]: time="2024-12-13T13:17:13.544611361Z" level=info msg="StartContainer for \"a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af\"" Dec 13 13:17:13.597329 systemd[1]: Started cri-containerd-a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af.scope - libcontainer container a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af. Dec 13 13:17:13.657765 containerd[1941]: time="2024-12-13T13:17:13.657696777Z" level=info msg="StartContainer for \"a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af\" returns successfully" Dec 13 13:17:13.661716 systemd[1]: cri-containerd-a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af.scope: Deactivated successfully. Dec 13 13:17:13.720237 containerd[1941]: time="2024-12-13T13:17:13.720072065Z" level=info msg="shim disconnected" id=a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af namespace=k8s.io Dec 13 13:17:13.720237 containerd[1941]: time="2024-12-13T13:17:13.720151545Z" level=warning msg="cleaning up after shim disconnected" id=a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af namespace=k8s.io Dec 13 13:17:13.720237 containerd[1941]: time="2024-12-13T13:17:13.720171319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:14.035329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39878ae39b083c9ab04a17159efbf03a5d7567ac73348528ff283e5460cc7af-rootfs.mount: Deactivated successfully. Dec 13 13:17:14.046350 kubelet[2400]: E1213 13:17:14.046278 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:14.514544 containerd[1941]: time="2024-12-13T13:17:14.514450237Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:17:14.542438 containerd[1941]: time="2024-12-13T13:17:14.542297098Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58\"" Dec 13 13:17:14.543648 containerd[1941]: time="2024-12-13T13:17:14.543468667Z" level=info msg="StartContainer for \"3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58\"" Dec 13 13:17:14.600371 systemd[1]: Started cri-containerd-3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58.scope - libcontainer container 3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58. 
Dec 13 13:17:14.641681 systemd[1]: cri-containerd-3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58.scope: Deactivated successfully. Dec 13 13:17:14.651179 containerd[1941]: time="2024-12-13T13:17:14.650892869Z" level=info msg="StartContainer for \"3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58\" returns successfully" Dec 13 13:17:14.696159 containerd[1941]: time="2024-12-13T13:17:14.696084162Z" level=info msg="shim disconnected" id=3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58 namespace=k8s.io Dec 13 13:17:14.696923 containerd[1941]: time="2024-12-13T13:17:14.696511924Z" level=warning msg="cleaning up after shim disconnected" id=3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58 namespace=k8s.io Dec 13 13:17:14.696923 containerd[1941]: time="2024-12-13T13:17:14.696543560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:15.033103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6904b9906c33cf3655b601f465f9c021b354b5b550154f616d0d6f99cb4d58-rootfs.mount: Deactivated successfully. Dec 13 13:17:15.047261 kubelet[2400]: E1213 13:17:15.047198 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:15.523477 containerd[1941]: time="2024-12-13T13:17:15.523275609Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:17:15.556068 containerd[1941]: time="2024-12-13T13:17:15.555963169Z" level=info msg="CreateContainer within sandbox \"153d6a8862556e9e3ca482045480cada542810e5ca6d4da405648fbab0eae1f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33\"" Dec 13 13:17:15.557078 containerd[1941]: time="2024-12-13T13:17:15.556809256Z" level=info msg="StartContainer for \"9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33\"" Dec 13 13:17:15.616350 systemd[1]: Started cri-containerd-9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33.scope - libcontainer container 9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33. Dec 13 13:17:15.666557 containerd[1941]: time="2024-12-13T13:17:15.666478063Z" level=info msg="StartContainer for \"9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33\" returns successfully" Dec 13 13:17:15.987838 kubelet[2400]: E1213 13:17:15.987636 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:16.033262 systemd[1]: run-containerd-runc-k8s.io-9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33-runc.eWJMQF.mount: Deactivated successfully. Dec 13 13:17:16.048310 kubelet[2400]: E1213 13:17:16.048247 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:16.302535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468377436.mount: Deactivated successfully. 
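Between 13:17:11 and 13:17:15 the containerd entries walk through Cilium's init steps in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent container starts at 13:17:15.666; each short-lived step is followed by the scope deactivation and "shim disconnected" cleanup messages seen above. A small Python sketch that recovers that ordering from a saved journal; the regex and file name are assumptions:

```python
import re
import sys

# containerd logs one record per container of the form
#   CreateContainer within sandbox \"<sandbox>\" for &ContainerMetadata{Name:<name>,Attempt:N,}
#   returns container id \"<id>\"
# and the order of these records reproduces the init-container sequence above.
CREATED_RE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} returns container id \\?"(?P<cid>[0-9a-f]+)\\?"'
)

def container_sequence(path: str):
    for line in open(path, errors="replace"):
        m = CREATED_RE.search(line)
        if m:
            yield m.group("sandbox")[:12], m.group("name"), m.group("cid")[:12]

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.log"
    for sandbox, name, cid in container_sequence(path):
        print(f"sandbox {sandbox}  {name:<24} -> {cid}")
```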
Dec 13 13:17:16.606199 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 13:17:17.049254 kubelet[2400]: E1213 13:17:17.049085 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:18.049972 kubelet[2400]: E1213 13:17:18.049893 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:18.927060 containerd[1941]: time="2024-12-13T13:17:18.925951320Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:18.929288 containerd[1941]: time="2024-12-13T13:17:18.929216352Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138298" Dec 13 13:17:18.931390 containerd[1941]: time="2024-12-13T13:17:18.931325722Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:18.936366 containerd[1941]: time="2024-12-13T13:17:18.935873415Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.766489935s" Dec 13 13:17:18.936366 containerd[1941]: time="2024-12-13T13:17:18.936007846Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 13:17:18.941796 containerd[1941]: time="2024-12-13T13:17:18.941311449Z" level=info msg="CreateContainer within sandbox \"c0d37261e7a62e3c4c97a2492b1ae2da31c6a9c924aaeeb13a1c0db4fecd89e9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:17:18.974497 containerd[1941]: time="2024-12-13T13:17:18.972929274Z" level=info msg="CreateContainer within sandbox \"c0d37261e7a62e3c4c97a2492b1ae2da31c6a9c924aaeeb13a1c0db4fecd89e9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832\"" Dec 13 13:17:18.973144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629649807.mount: Deactivated successfully. Dec 13 13:17:18.977370 containerd[1941]: time="2024-12-13T13:17:18.976433657Z" level=info msg="StartContainer for \"de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832\"" Dec 13 13:17:19.043472 systemd[1]: run-containerd-runc-k8s.io-de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832-runc.N4dkzZ.mount: Deactivated successfully. 
Dec 13 13:17:19.050217 kubelet[2400]: E1213 13:17:19.050110 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:19.059582 systemd[1]: Started cri-containerd-de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832.scope - libcontainer container de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832. Dec 13 13:17:19.119887 containerd[1941]: time="2024-12-13T13:17:19.119811519Z" level=info msg="StartContainer for \"de22f1b61423a0e02851f6a1665ce0b7bfad8270483a240526d71586fac69832\" returns successfully" Dec 13 13:17:19.586219 kubelet[2400]: I1213 13:17:19.585689 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-kkvph" podStartSLOduration=2.81529189 podStartE2EDuration="10.585666918s" podCreationTimestamp="2024-12-13 13:17:09 +0000 UTC" firstStartedPulling="2024-12-13 13:17:11.167879105 +0000 UTC m=+76.342330100" lastFinishedPulling="2024-12-13 13:17:18.938254133 +0000 UTC m=+84.112705128" observedRunningTime="2024-12-13 13:17:19.58530813 +0000 UTC m=+84.759759126" watchObservedRunningTime="2024-12-13 13:17:19.585666918 +0000 UTC m=+84.760117901" Dec 13 13:17:19.586219 kubelet[2400]: I1213 13:17:19.585830 2400 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kv6zk" podStartSLOduration=10.585819682 podStartE2EDuration="10.585819682s" podCreationTimestamp="2024-12-13 13:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:17:16.565850939 +0000 UTC m=+81.740301958" watchObservedRunningTime="2024-12-13 13:17:19.585819682 +0000 UTC m=+84.760270785" Dec 13 13:17:20.050836 kubelet[2400]: E1213 13:17:20.050262 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:20.940873 (udev-worker)[5090]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:17:20.943586 systemd-networkd[1850]: lxc_health: Link UP Dec 13 13:17:20.960985 systemd-networkd[1850]: lxc_health: Gained carrier Dec 13 13:17:21.051222 kubelet[2400]: E1213 13:17:21.051151 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:22.051840 kubelet[2400]: E1213 13:17:22.051766 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:22.206506 systemd-networkd[1850]: lxc_health: Gained IPv6LL Dec 13 13:17:23.052405 kubelet[2400]: E1213 13:17:23.052329 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:23.934157 systemd[1]: run-containerd-runc-k8s.io-9b14f9b25dd4007e62f4202d6c3c835153b5d9be7700829849aeff2bdfe36e33-runc.g53C2T.mount: Deactivated successfully. 
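The pod_startup_latency_tracker entries at 13:17:19 carry both wall-clock timestamps and monotonic offsets (m=+76.342330100 and m=+84.112705128 for the operator image pull); subtracting the offsets gives roughly the 7.77s that containerd itself reported for the pull at 13:17:18. A throwaway Python check of that arithmetic, with the values copied from the log:

```python
import re

LINE = (
    'firstStartedPulling="2024-12-13 13:17:11.167879105 +0000 UTC m=+76.342330100" '
    'lastFinishedPulling="2024-12-13 13:17:18.938254133 +0000 UTC m=+84.112705128"'
)

def pull_seconds(record: str) -> float:
    # The m=+<seconds> suffix is the kubelet process's monotonic clock, so the
    # difference is immune to wall-clock adjustments.
    offsets = [float(x) for x in re.findall(r"m=\+([0-9.]+)", record)]
    return offsets[-1] - offsets[0]

print(f"image pull took ~{pull_seconds(LINE):.3f}s")  # ~7.770s, close to containerd's 7.766489935s
```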
Dec 13 13:17:24.053170 kubelet[2400]: E1213 13:17:24.053106 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:25.044396 ntpd[1920]: Listen normally on 16 lxc_health [fe80::d412:1bff:fef0:5499%15]:123 Dec 13 13:17:25.046233 ntpd[1920]: 13 Dec 13:17:25 ntpd[1920]: Listen normally on 16 lxc_health [fe80::d412:1bff:fef0:5499%15]:123 Dec 13 13:17:25.053923 kubelet[2400]: E1213 13:17:25.053853 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:26.054370 kubelet[2400]: E1213 13:17:26.054303 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:27.054992 kubelet[2400]: E1213 13:17:27.054864 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:28.055666 kubelet[2400]: E1213 13:17:28.055544 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:29.056503 kubelet[2400]: E1213 13:17:29.056448 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:30.057604 kubelet[2400]: E1213 13:17:30.057533 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:31.058003 kubelet[2400]: E1213 13:17:31.057932 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:32.058748 kubelet[2400]: E1213 13:17:32.058650 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:33.059269 kubelet[2400]: E1213 13:17:33.059196 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:34.059769 kubelet[2400]: E1213 13:17:34.059707 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:35.060422 kubelet[2400]: E1213 13:17:35.060358 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:35.987265 kubelet[2400]: E1213 13:17:35.987193 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:36.061571 kubelet[2400]: E1213 13:17:36.061516 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:37.062719 kubelet[2400]: E1213 13:17:37.062621 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:38.063698 kubelet[2400]: E1213 13:17:38.063639 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:39.064500 kubelet[2400]: E1213 13:17:39.064439 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:40.065429 kubelet[2400]: E1213 13:17:40.065367 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:41.065794 kubelet[2400]: E1213 13:17:41.065726 2400 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:42.066768 kubelet[2400]: E1213 13:17:42.066713 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:43.067587 kubelet[2400]: E1213 13:17:43.067526 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:44.068550 kubelet[2400]: E1213 13:17:44.068496 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:45.069132 kubelet[2400]: E1213 13:17:45.069062 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:46.069488 kubelet[2400]: E1213 13:17:46.069419 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:47.070455 kubelet[2400]: E1213 13:17:47.070378 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:48.071407 kubelet[2400]: E1213 13:17:48.071323 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:48.288228 kubelet[2400]: E1213 13:17:48.288155 2400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": context deadline exceeded" Dec 13 13:17:49.072591 kubelet[2400]: E1213 13:17:49.072520 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:50.072923 kubelet[2400]: E1213 13:17:50.072866 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:51.073997 kubelet[2400]: E1213 13:17:51.073928 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:52.075181 kubelet[2400]: E1213 13:17:52.075115 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:53.075643 kubelet[2400]: E1213 13:17:53.075583 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:54.076374 kubelet[2400]: E1213 13:17:54.076305 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:55.076982 kubelet[2400]: E1213 13:17:55.076925 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:55.987829 kubelet[2400]: E1213 13:17:55.987761 2400 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:56.008991 containerd[1941]: time="2024-12-13T13:17:56.008934816Z" level=info msg="StopPodSandbox for \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\"" Dec 13 13:17:56.009587 containerd[1941]: time="2024-12-13T13:17:56.009110548Z" level=info msg="TearDown network for sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" successfully" Dec 13 13:17:56.009587 containerd[1941]: 
time="2024-12-13T13:17:56.009136637Z" level=info msg="StopPodSandbox for \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" returns successfully" Dec 13 13:17:56.010404 containerd[1941]: time="2024-12-13T13:17:56.010356554Z" level=info msg="RemovePodSandbox for \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\"" Dec 13 13:17:56.010540 containerd[1941]: time="2024-12-13T13:17:56.010410029Z" level=info msg="Forcibly stopping sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\"" Dec 13 13:17:56.010540 containerd[1941]: time="2024-12-13T13:17:56.010517675Z" level=info msg="TearDown network for sandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" successfully" Dec 13 13:17:56.017392 containerd[1941]: time="2024-12-13T13:17:56.017314992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:17:56.018403 containerd[1941]: time="2024-12-13T13:17:56.017425219Z" level=info msg="RemovePodSandbox \"2ca9aa04990ddef21c2be9c0541a6ba059a9e9b04f7a11ab44238ab264ef8f0c\" returns successfully" Dec 13 13:17:56.078163 kubelet[2400]: E1213 13:17:56.078082 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:57.078368 kubelet[2400]: E1213 13:17:57.078306 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:58.078857 kubelet[2400]: E1213 13:17:58.078791 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:17:58.289414 kubelet[2400]: E1213 13:17:58.289337 2400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 13:17:59.079136 kubelet[2400]: E1213 13:17:59.079070 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:00.080167 kubelet[2400]: E1213 13:18:00.080109 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:00.666807 update_engine[1925]: I20241213 13:18:00.666730 1925 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 13:18:00.666807 update_engine[1925]: I20241213 13:18:00.666802 1925 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 13:18:00.667411 update_engine[1925]: I20241213 13:18:00.667104 1925 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 13:18:00.667973 update_engine[1925]: I20241213 13:18:00.667909 1925 omaha_request_params.cc:62] Current group set to alpha Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668116 1925 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668149 1925 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668188 1925 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668246 1925 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668367 1925 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668390 1925 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Dec 13 13:18:00.668556 update_engine[1925]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Dec 13 13:18:00.668556 update_engine[1925]: <os version="Chateau" platform="CoreOS" sp="4186.0.0_aarch64"></os>
Dec 13 13:18:00.668556 update_engine[1925]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4186.0.0" track="alpha" bootid="{e3bac351-df89-47f1-9a5f-67156d4e2d4b}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4186.0.0" machineid="ec2298afae61cbb831b36bb14ff494ed" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" >
Dec 13 13:18:00.668556 update_engine[1925]: <ping active="1"></ping>
Dec 13 13:18:00.668556 update_engine[1925]: <updatecheck></updatecheck>
Dec 13 13:18:00.668556 update_engine[1925]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Dec 13 13:18:00.668556 update_engine[1925]: </app>
Dec 13 13:18:00.668556 update_engine[1925]: </request>
Dec 13 13:18:00.668556 update_engine[1925]: I20241213 13:18:00.668407 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 13:18:00.669458 locksmithd[1962]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 13:18:00.670634 update_engine[1925]: I20241213 13:18:00.670560 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 13:18:00.671146 update_engine[1925]: I20241213 13:18:00.671082 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 13:18:00.691604 update_engine[1925]: E20241213 13:18:00.691545 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 13:18:00.691696 update_engine[1925]: I20241213 13:18:00.691650 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 13:18:01.081422 kubelet[2400]: E1213 13:18:01.081274 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:02.082512 kubelet[2400]: E1213 13:18:02.082415 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:03.083349 kubelet[2400]: E1213 13:18:03.083283 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:04.084450 kubelet[2400]: E1213 13:18:04.084396 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:05.085385 kubelet[2400]: E1213 13:18:05.085327 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:06.085973 kubelet[2400]: E1213 13:18:06.085912 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:07.086828 kubelet[2400]: E1213 13:18:07.086757 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:08.087681 kubelet[2400]: E1213 13:18:08.087621 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:08.290241 kubelet[2400]: E1213 13:18:08.290170 2400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 13:18:09.088782 kubelet[2400]: E1213 13:18:09.088725 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:10.089647 kubelet[2400]: E1213 13:18:10.089579 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:10.651784 update_engine[1925]: I20241213 13:18:10.651677 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 13:18:10.652437 update_engine[1925]: I20241213 13:18:10.652110 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 13:18:10.652529 update_engine[1925]: I20241213 13:18:10.652451 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
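With the Omaha endpoint set to "disabled", every update_engine transfer fails with "Could not resolve host: disabled" and is retried; the later entries show attempts at roughly ten-second intervals (13:18:00, 13:18:10, 13:18:20) before the attempter gives up at 13:18:30. A hypothetical Python sketch for listing those retries and the gaps between them from a saved journal; the path and regex are assumptions:

```python
import re
import sys
from datetime import datetime

# Lines like: "Dec 13 13:18:00.691696 update_engine[1925]: ... No HTTP response, retry 1"
RETRY_RE = re.compile(r"^(\w+ \d+ [\d:.]+) update_engine\[\d+\]: .*No HTTP response, retry (\d+)")

def retry_gaps(path: str):
    previous = None
    for line in open(path, errors="replace"):
        m = RETRY_RE.match(line)
        if not m:
            continue
        # The journal omits the year; pin one only so the deltas can be computed.
        stamp = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f").replace(year=2024)
        gap = (stamp - previous).total_seconds() if previous else 0.0
        previous = stamp
        yield int(m.group(2)), stamp.time(), gap

if __name__ == "__main__":
    for attempt, when, gap in retry_gaps(sys.argv[1] if len(sys.argv) > 1 else "journal.log"):
        print(f"retry {attempt} at {when}  (+{gap:.1f}s since previous)")
```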
Dec 13 13:18:10.652970 update_engine[1925]: E20241213 13:18:10.652911 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 13:18:10.653080 update_engine[1925]: I20241213 13:18:10.653003 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 13:18:11.090370 kubelet[2400]: E1213 13:18:11.090305 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:12.090466 kubelet[2400]: E1213 13:18:12.090409 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:12.196048 kubelet[2400]: E1213 13:18:12.193127 2400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": unexpected EOF" Dec 13 13:18:12.205221 kubelet[2400]: E1213 13:18:12.203116 2400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": read tcp 172.31.17.245:45686->172.31.29.1:6443: read: connection reset by peer" Dec 13 13:18:12.205221 kubelet[2400]: I1213 13:18:12.203180 2400 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 13:18:12.205221 kubelet[2400]: E1213 13:18:12.203893 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="200ms" Dec 13 13:18:12.405424 kubelet[2400]: E1213 13:18:12.405262 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="400ms" Dec 13 13:18:12.806868 kubelet[2400]: E1213 13:18:12.806695 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="800ms" Dec 13 13:18:13.091007 kubelet[2400]: E1213 13:18:13.090939 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:13.200632 kubelet[2400]: E1213 13:18:13.200540 2400 desired_state_of_world_populator.go:318] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.29.1:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.29.1:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config" Dec 13 13:18:14.091864 kubelet[2400]: E1213 13:18:14.091800 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:15.092755 kubelet[2400]: E1213 13:18:15.092697 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:15.987573 kubelet[2400]: E1213 13:18:15.987517 2400 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:16.093362 kubelet[2400]: E1213 13:18:16.093316 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:17.093730 kubelet[2400]: E1213 13:18:17.093670 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:18.094671 kubelet[2400]: E1213 13:18:18.094608 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:19.095357 kubelet[2400]: E1213 13:18:19.095300 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:20.096439 kubelet[2400]: E1213 13:18:20.096367 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:20.657186 update_engine[1925]: I20241213 13:18:20.657109 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 13:18:20.657697 update_engine[1925]: I20241213 13:18:20.657446 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 13:18:20.657799 update_engine[1925]: I20241213 13:18:20.657750 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 13:18:20.658318 update_engine[1925]: E20241213 13:18:20.658260 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 13:18:20.658397 update_engine[1925]: I20241213 13:18:20.658353 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 13:18:21.096731 kubelet[2400]: E1213 13:18:21.096672 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:22.097119 kubelet[2400]: E1213 13:18:22.097070 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:23.098284 kubelet[2400]: E1213 13:18:23.098194 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:23.608825 kubelet[2400]: E1213 13:18:23.608703 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.245?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Dec 13 13:18:24.099485 kubelet[2400]: E1213 13:18:24.099394 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:25.100234 kubelet[2400]: E1213 13:18:25.100163 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:26.100867 kubelet[2400]: E1213 13:18:26.100801 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:27.101758 kubelet[2400]: E1213 13:18:27.101695 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:28.102840 kubelet[2400]: E1213 13:18:28.102781 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:28.993317 kubelet[2400]: E1213 
13:18:28.993248 2400 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.245\": Get \"https://172.31.29.1:6443/api/v1/nodes/172.31.17.245?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 13:18:29.104096 kubelet[2400]: E1213 13:18:29.103993 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:30.104778 kubelet[2400]: E1213 13:18:30.104725 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:30.658086 update_engine[1925]: I20241213 13:18:30.657450 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 13:18:30.658086 update_engine[1925]: I20241213 13:18:30.657773 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658108 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 13:18:30.658908 update_engine[1925]: E20241213 13:18:30.658566 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658643 1925 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658663 1925 omaha_request_action.cc:617] Omaha request response: Dec 13 13:18:30.658908 update_engine[1925]: E20241213 13:18:30.658802 1925 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658838 1925 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658856 1925 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 13:18:30.658908 update_engine[1925]: I20241213 13:18:30.658871 1925 update_attempter.cc:306] Processing Done. Dec 13 13:18:30.658908 update_engine[1925]: E20241213 13:18:30.658900 1925 update_attempter.cc:619] Update failed. Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.658921 1925 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.658936 1925 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.658951 1925 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.659092 1925 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.659134 1925 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.659152 1925 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Dec 13 13:18:30.659469 update_engine[1925]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Dec 13 13:18:30.659469 update_engine[1925]: <os version="Chateau" platform="CoreOS" sp="4186.0.0_aarch64"></os>
Dec 13 13:18:30.659469 update_engine[1925]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4186.0.0" track="alpha" bootid="{e3bac351-df89-47f1-9a5f-67156d4e2d4b}" oem="ami" oemversion="3.2.985.0-r1" alephversion="4186.0.0" machineid="ec2298afae61cbb831b36bb14ff494ed" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" >
Dec 13 13:18:30.659469 update_engine[1925]: <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Dec 13 13:18:30.659469 update_engine[1925]: </app>
Dec 13 13:18:30.659469 update_engine[1925]: </request>
Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.659168 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 13:18:30.659469 update_engine[1925]: I20241213 13:18:30.659421 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.659701 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 13:18:30.660383 update_engine[1925]: E20241213 13:18:30.660202 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660279 1925 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660297 1925 omaha_request_action.cc:617] Omaha request response:
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660315 1925 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660329 1925 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660344 1925 update_attempter.cc:306] Processing Done.
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660360 1925 update_attempter.cc:310] Error event sent.
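The error event just posted carries errorcode="268437456", while the attempter earlier logged "Converting error code 2000"; 268437456 is exactly 2000 + 0x10000000, so the posted value looks like the base code with a high flag bit set (an inference from the numbers, not something the log states). The arithmetic as a quick Python check:

```python
# Values copied from the update_engine entries above.
reported = 268437456          # errorcode="..." in the posted <event>
base = reported & 0x0FFFFFFF  # strip anything above bit 28

print(hex(reported))  # 0x100007d0
print(base)           # 2000, the "error code 2000" logged at 13:18:30.658921
```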
Dec 13 13:18:30.660383 update_engine[1925]: I20241213 13:18:30.660380 1925 update_check_scheduler.cc:74] Next update check in 49m12s Dec 13 13:18:30.660945 locksmithd[1962]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 13:18:30.660945 locksmithd[1962]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 13:18:31.105377 kubelet[2400]: E1213 13:18:31.105323 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:32.106417 kubelet[2400]: E1213 13:18:32.106335 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:18:33.106769 kubelet[2400]: E1213 13:18:33.106713 2400 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
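Alongside the once-per-second "Unable to read config path" records, the kubelet spends much of this window unable to reach the API server at 172.31.29.1:6443: lease updates fail at 13:17:48, 13:17:58 and 13:18:08, and after the unexpected EOF and connection reset at 13:18:12 the ensure-lease retries back off from 200ms to 400ms, 800ms and 1.6s. A hypothetical Python sketch (path and regexes are assumptions) that pulls those lease failures and their retry intervals out of a saved journal:

```python
import re
import sys

# Matches kubelet records such as:
#   ... kubelet[2400]: E1213 ... "Failed to update lease" err="Put ..."
#   ... kubelet[2400]: E1213 ... "Failed to ensure lease exists, will retry" err="..." interval="200ms"
LEASE_FAIL_RE = re.compile(r'^(\w+ \d+ [\d:.]+) kubelet\[\d+\]: .*"Failed to (update|ensure) lease')
INTERVAL_RE = re.compile(r'interval="([^"]+)"')

def lease_failures(path: str):
    for line in open(path, errors="replace"):
        m = LEASE_FAIL_RE.match(line)
        if not m:
            continue
        interval = INTERVAL_RE.search(line)
        yield m.group(1), m.group(2), interval.group(1) if interval else "-"

if __name__ == "__main__":
    for stamp, kind, interval in lease_failures(sys.argv[1] if len(sys.argv) > 1 else "journal.log"):
        print(f"{stamp}  {kind}-lease failure, next retry interval {interval}")
```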