Dec 13 14:15:08.024620 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 14:15:08.024662 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:15:08.024686 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:15:08.024701 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Dec 13 14:15:08.024714 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:15:08.024728 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 14:15:08.024744 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 14:15:08.024758 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:15:08.024772 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 14:15:08.024785 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:15:08.024803 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 14:15:08.024818 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 14:15:08.024832 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 14:15:08.024846 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:15:08.024862 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 14:15:08.024881 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 14:15:08.024896 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 14:15:08.024915 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 14:15:08.024930 kernel: printk: bootconsole [uart0] enabled
Dec 13 14:15:08.024945 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:15:08.024960 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:15:08.024974 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Dec 13 14:15:08.024988 kernel: Zone ranges:
Dec 13 14:15:08.025003 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:15:08.025017 kernel: DMA32 empty
Dec 13 14:15:08.025031 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 14:15:08.025050 kernel: Movable zone start for each node
Dec 13 14:15:08.025065 kernel: Early memory node ranges
Dec 13 14:15:08.025079 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 14:15:08.025093 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 14:15:08.025108 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 14:15:08.025122 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 14:15:08.025136 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 14:15:08.025150 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 14:15:08.025165 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 14:15:08.025179 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 14:15:08.025193 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:15:08.025208 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 14:15:08.025226 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:15:08.025241 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 14:15:08.025264 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:15:08.025283 kernel: psci: Trusted OS migration not required
Dec 13 14:15:08.025299 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:15:08.025323 kernel: ACPI: SRAT not present
Dec 13 14:15:08.025340 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:15:08.025379 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:15:08.025403 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:15:08.025420 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:15:08.025438 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:15:08.025457 kernel: CPU features: detected: Spectre-v2
Dec 13 14:15:08.025476 kernel: CPU features: detected: Spectre-v3a
Dec 13 14:15:08.025494 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:15:08.025510 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:15:08.025530 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:15:08.025556 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 14:15:08.025577 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 14:15:08.025595 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 14:15:08.025613 kernel: Policy zone: Normal
Dec 13 14:15:08.025635 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:15:08.025655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:15:08.025675 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:15:08.025693 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:15:08.025713 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:15:08.025729 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 14:15:08.025753 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Dec 13 14:15:08.025771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:15:08.025788 kernel: trace event string verifier disabled
Dec 13 14:15:08.025807 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:15:08.025824 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:15:08.025845 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:15:08.025863 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:15:08.025883 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:15:08.025899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:15:08.025914 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:15:08.025929 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:15:08.025949 kernel: GICv3: 96 SPIs implemented
Dec 13 14:15:08.025969 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:15:08.025984 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:15:08.025999 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:15:08.026019 kernel: GICv3: 16 PPIs implemented
Dec 13 14:15:08.026035 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 14:15:08.026050 kernel: ACPI: SRAT not present
Dec 13 14:15:08.026065 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 14:15:08.026081 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:15:08.026103 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:15:08.026121 kernel: GICv3: using LPI property table @0x00000004000b0000
Dec 13 14:15:08.026140 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 14:15:08.026159 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Dec 13 14:15:08.026180 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 14:15:08.026200 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 14:15:08.026219 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 14:15:08.026239 kernel: Console: colour dummy device 80x25
Dec 13 14:15:08.026255 kernel: printk: console [tty1] enabled
Dec 13 14:15:08.026271 kernel: ACPI: Core revision 20210730
Dec 13 14:15:08.026286 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 14:15:08.026302 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:15:08.026317 kernel: LSM: Security Framework initializing
Dec 13 14:15:08.026337 kernel: SELinux: Initializing.
Dec 13 14:15:08.026353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:15:08.043588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:15:08.045405 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:15:08.045441 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 14:15:08.045459 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 14:15:08.045476 kernel: Remapping and enabling EFI services.
Dec 13 14:15:08.045491 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:15:08.045507 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:15:08.045531 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 14:15:08.045548 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Dec 13 14:15:08.045564 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 14:15:08.045580 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:15:08.045596 kernel: SMP: Total of 2 processors activated.
Dec 13 14:15:08.045612 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:15:08.045628 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 14:15:08.045643 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:15:08.045658 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:15:08.045674 kernel: alternatives: patching kernel code
Dec 13 14:15:08.045693 kernel: devtmpfs: initialized
Dec 13 14:15:08.045709 kernel: KASLR disabled due to lack of seed
Dec 13 14:15:08.045736 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:15:08.046493 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:15:08.046511 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:15:08.046528 kernel: SMBIOS 3.0.0 present.
Dec 13 14:15:08.046544 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 14:15:08.046561 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:15:08.046578 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:15:08.046612 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:15:08.046634 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:15:08.046658 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:15:08.046675 kernel: audit: type=2000 audit(0.252:1): state=initialized audit_enabled=0 res=1
Dec 13 14:15:08.046692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:15:08.046708 kernel: cpuidle: using governor menu
Dec 13 14:15:08.046724 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:15:08.046744 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:15:08.046761 kernel: ACPI: bus type PCI registered
Dec 13 14:15:08.046777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:15:08.046793 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:15:08.046809 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:15:08.046826 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:15:08.046843 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:15:08.046859 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:15:08.046875 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:15:08.046896 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:15:08.046913 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:15:08.046929 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:15:08.046945 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:15:08.046961 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:15:08.046977 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:15:08.046993 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:15:08.047009 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:15:08.047026 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:15:08.047046 kernel: ACPI: Interpreter enabled
Dec 13 14:15:08.047062 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:15:08.047078 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:15:08.047094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 14:15:08.047430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:15:08.047632 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:15:08.047822 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:15:08.048027 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 14:15:08.048227 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 14:15:08.048250 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 14:15:08.048267 kernel: acpiphp: Slot [1] registered
Dec 13 14:15:08.048283 kernel: acpiphp: Slot [2] registered
Dec 13 14:15:08.048300 kernel: acpiphp: Slot [3] registered
Dec 13 14:15:08.048316 kernel: acpiphp: Slot [4] registered
Dec 13 14:15:08.048332 kernel: acpiphp: Slot [5] registered
Dec 13 14:15:08.048348 kernel: acpiphp: Slot [6] registered
Dec 13 14:15:08.052159 kernel: acpiphp: Slot [7] registered
Dec 13 14:15:08.052194 kernel: acpiphp: Slot [8] registered
Dec 13 14:15:08.052212 kernel: acpiphp: Slot [9] registered
Dec 13 14:15:08.052230 kernel: acpiphp: Slot [10] registered
Dec 13 14:15:08.052246 kernel: acpiphp: Slot [11] registered
Dec 13 14:15:08.052263 kernel: acpiphp: Slot [12] registered
Dec 13 14:15:08.052279 kernel: acpiphp: Slot [13] registered
Dec 13 14:15:08.052295 kernel: acpiphp: Slot [14] registered
Dec 13 14:15:08.052312 kernel: acpiphp: Slot [15] registered
Dec 13 14:15:08.052328 kernel: acpiphp: Slot [16] registered
Dec 13 14:15:08.052349 kernel: acpiphp: Slot [17] registered
Dec 13 14:15:08.052462 kernel: acpiphp: Slot [18] registered
Dec 13 14:15:08.052481 kernel: acpiphp: Slot [19] registered
Dec 13 14:15:08.052497 kernel: acpiphp: Slot [20] registered
Dec 13 14:15:08.052513 kernel: acpiphp: Slot [21] registered
Dec 13 14:15:08.052530 kernel: acpiphp: Slot [22] registered
Dec 13 14:15:08.052546 kernel: acpiphp: Slot [23] registered
Dec 13 14:15:08.052562 kernel: acpiphp: Slot [24] registered
Dec 13 14:15:08.052578 kernel: acpiphp: Slot [25] registered
Dec 13 14:15:08.052595 kernel: acpiphp: Slot [26] registered
Dec 13 14:15:08.052617 kernel: acpiphp: Slot [27] registered
Dec 13 14:15:08.052633 kernel: acpiphp: Slot [28] registered
Dec 13 14:15:08.052649 kernel: acpiphp: Slot [29] registered
Dec 13 14:15:08.052665 kernel: acpiphp: Slot [30] registered
Dec 13 14:15:08.052681 kernel: acpiphp: Slot [31] registered
Dec 13 14:15:08.052697 kernel: PCI host bridge to bus 0000:00
Dec 13 14:15:08.052972 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 14:15:08.053155 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:15:08.053337 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:15:08.053533 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 14:15:08.053757 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 14:15:08.053968 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 14:15:08.054164 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 14:15:08.057455 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:15:08.057757 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 14:15:08.057958 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:15:08.058176 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:15:08.059444 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 14:15:08.059665 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 14:15:08.059858 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 14:15:08.060050 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:15:08.060247 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 14:15:08.060478 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 14:15:08.060674 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 14:15:08.064434 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 14:15:08.064716 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 14:15:08.064911 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 14:15:08.065089 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:15:08.065292 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:15:08.065318 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:15:08.065335 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:15:08.065352 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:15:08.066439 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:15:08.066459 kernel: iommu: Default domain type: Translated
Dec 13 14:15:08.066476 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:15:08.066493 kernel: vgaarb: loaded
Dec 13 14:15:08.066510 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:15:08.066534 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:15:08.066551 kernel: PTP clock support registered
Dec 13 14:15:08.066568 kernel: Registered efivars operations
Dec 13 14:15:08.066584 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:15:08.066618 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:15:08.066646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:15:08.066675 kernel: pnp: PnP ACPI init
Dec 13 14:15:08.066929 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 14:15:08.066962 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:15:08.066980 kernel: NET: Registered PF_INET protocol family
Dec 13 14:15:08.066996 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:15:08.067013 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:15:08.067030 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:15:08.067047 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:15:08.067064 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:15:08.067080 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:15:08.067097 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:15:08.067117 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:15:08.067134 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:15:08.067150 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:15:08.067166 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 14:15:08.067182 kernel: kvm [1]: HYP mode not available
Dec 13 14:15:08.067198 kernel: Initialise system trusted keyrings
Dec 13 14:15:08.067215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:15:08.067231 kernel: Key type asymmetric registered
Dec 13 14:15:08.067247 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:15:08.067267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:15:08.067284 kernel: io scheduler mq-deadline registered
Dec 13 14:15:08.067300 kernel: io scheduler kyber registered
Dec 13 14:15:08.067316 kernel: io scheduler bfq registered
Dec 13 14:15:08.067579 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 14:15:08.067608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:15:08.067625 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:15:08.067641 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 14:15:08.067663 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 14:15:08.067681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:15:08.067698 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:15:08.067894 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 14:15:08.067918 kernel: printk: console [ttyS0] disabled
Dec 13 14:15:08.067936 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 14:15:08.067952 kernel: printk: console [ttyS0] enabled
Dec 13 14:15:08.067968 kernel: printk: bootconsole [uart0] disabled
Dec 13 14:15:08.067984 kernel: thunder_xcv, ver 1.0
Dec 13 14:15:08.068004 kernel: thunder_bgx, ver 1.0
Dec 13 14:15:08.068021 kernel: nicpf, ver 1.0
Dec 13 14:15:08.068037 kernel: nicvf, ver 1.0
Dec 13 14:15:08.068241 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:15:08.072453 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:15:07 UTC (1734099307)
Dec 13 14:15:08.072502 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:15:08.072526 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:15:08.072543 kernel: Segment Routing with IPv6
Dec 13 14:15:08.072560 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:15:08.072585 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:15:08.072601 kernel: Key type dns_resolver registered
Dec 13 14:15:08.072618 kernel: registered taskstats version 1
Dec 13 14:15:08.072634 kernel: Loading compiled-in X.509 certificates
Dec 13 14:15:08.072650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:15:08.072667 kernel: Key type .fscrypt registered
Dec 13 14:15:08.072682 kernel: Key type fscrypt-provisioning registered
Dec 13 14:15:08.072698 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:15:08.072714 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:15:08.072734 kernel: ima: No architecture policies found
Dec 13 14:15:08.072750 kernel: clk: Disabling unused clocks
Dec 13 14:15:08.072766 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:15:08.072782 kernel: Run /init as init process
Dec 13 14:15:08.072798 kernel: with arguments:
Dec 13 14:15:08.072814 kernel: /init
Dec 13 14:15:08.072829 kernel: with environment:
Dec 13 14:15:08.072845 kernel: HOME=/
Dec 13 14:15:08.072861 kernel: TERM=linux
Dec 13 14:15:08.072880 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:15:08.072901 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:15:08.072922 systemd[1]: Detected virtualization amazon.
Dec 13 14:15:08.072940 systemd[1]: Detected architecture arm64.
Dec 13 14:15:08.072957 systemd[1]: Running in initrd.
Dec 13 14:15:08.072974 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:15:08.072991 systemd[1]: Hostname set to .
Dec 13 14:15:08.073013 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:15:08.073031 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:15:08.073049 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:15:08.073066 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:15:08.073083 systemd[1]: Reached target paths.target.
Dec 13 14:15:08.073100 systemd[1]: Reached target slices.target.
Dec 13 14:15:08.073117 systemd[1]: Reached target swap.target.
Dec 13 14:15:08.073134 systemd[1]: Reached target timers.target.
Dec 13 14:15:08.073156 systemd[1]: Listening on iscsid.socket.
Dec 13 14:15:08.073174 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:15:08.073191 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:15:08.073209 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:15:08.073226 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:15:08.073244 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:15:08.073261 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:15:08.073279 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:15:08.073300 systemd[1]: Reached target sockets.target.
Dec 13 14:15:08.073319 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:15:08.073337 systemd[1]: Finished network-cleanup.service.
Dec 13 14:15:08.073366 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:15:08.076574 systemd[1]: Starting systemd-journald.service...
Dec 13 14:15:08.076595 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:15:08.076614 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:15:08.076632 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:15:08.076650 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:15:08.076675 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:15:08.076694 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:15:08.076711 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:15:08.076729 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:15:08.076747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:15:08.076764 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:15:08.076783 kernel: audit: type=1130 audit(1734099308.030:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.076802 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:15:08.076828 systemd-journald[310]: Journal started
Dec 13 14:15:08.076921 systemd-journald[310]: Runtime Journal (/run/log/journal/ec29d055126f4623ec4bc9ea633b864d) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:15:08.079427 systemd[1]: Started systemd-journald.service.
Dec 13 14:15:08.088659 kernel: audit: type=1130 audit(1734099308.079:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:07.963262 systemd-modules-load[311]: Inserted module 'overlay'
Dec 13 14:15:08.024819 systemd-resolved[312]: Positive Trust Anchors:
Dec 13 14:15:08.024833 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:15:08.024892 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:15:08.122753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:15:08.123290 dracut-cmdline[327]: dracut-dracut-053
Dec 13 14:15:08.123290 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Dec 13 14:15:08.123290 dracut-cmdline[327]: tyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:15:08.158793 kernel: Bridge firewalling registered
Dec 13 14:15:08.147718 systemd-modules-load[311]: Inserted module 'br_netfilter'
Dec 13 14:15:08.191983 kernel: SCSI subsystem initialized
Dec 13 14:15:08.215108 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:15:08.215179 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:15:08.221011 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:15:08.225897 systemd-modules-load[311]: Inserted module 'dm_multipath'
Dec 13 14:15:08.228699 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:15:08.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.234688 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:15:08.246109 kernel: audit: type=1130 audit(1734099308.231:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.258495 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:15:08.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.271465 kernel: audit: type=1130 audit(1734099308.257:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.328406 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:15:08.347404 kernel: iscsi: registered transport (tcp)
Dec 13 14:15:08.374466 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:15:08.374538 kernel: QLogic iSCSI HBA Driver
Dec 13 14:15:08.527218 systemd-resolved[312]: Defaulting to hostname 'linux'.
Dec 13 14:15:08.529507 kernel: random: crng init done
Dec 13 14:15:08.542695 kernel: audit: type=1130 audit(1734099308.528:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.529298 systemd[1]: Started systemd-resolved.service.
Dec 13 14:15:08.531424 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:15:08.558060 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:15:08.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.562121 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:15:08.570561 kernel: audit: type=1130 audit(1734099308.556:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:08.628403 kernel: raid6: neonx8 gen() 6410 MB/s
Dec 13 14:15:08.646390 kernel: raid6: neonx8 xor() 4584 MB/s
Dec 13 14:15:08.664390 kernel: raid6: neonx4 gen() 6597 MB/s
Dec 13 14:15:08.682389 kernel: raid6: neonx4 xor() 4729 MB/s
Dec 13 14:15:08.700395 kernel: raid6: neonx2 gen() 5816 MB/s
Dec 13 14:15:08.718389 kernel: raid6: neonx2 xor() 4363 MB/s
Dec 13 14:15:08.736392 kernel: raid6: neonx1 gen() 4456 MB/s
Dec 13 14:15:08.754389 kernel: raid6: neonx1 xor() 3532 MB/s
Dec 13 14:15:08.772389 kernel: raid6: int64x8 gen() 3432 MB/s
Dec 13 14:15:08.790388 kernel: raid6: int64x8 xor() 2028 MB/s
Dec 13 14:15:08.808389 kernel: raid6: int64x4 gen() 3848 MB/s
Dec 13 14:15:08.826388 kernel: raid6: int64x4 xor() 2168 MB/s
Dec 13 14:15:08.844392 kernel: raid6: int64x2 gen() 3615 MB/s
Dec 13 14:15:08.862401 kernel: raid6: int64x2 xor() 1922 MB/s
Dec 13 14:15:08.880394 kernel: raid6: int64x1 gen() 2764 MB/s
Dec 13 14:15:08.899809 kernel: raid6: int64x1 xor() 1437 MB/s
Dec 13 14:15:08.899845 kernel: raid6: using algorithm neonx4 gen() 6597 MB/s
Dec 13 14:15:08.899870 kernel: raid6: .... xor() 4729 MB/s, rmw enabled
Dec 13 14:15:08.901487 kernel: raid6: using neon recovery algorithm
Dec 13 14:15:08.921716 kernel: xor: measuring software checksum speed
Dec 13 14:15:08.921827 kernel: 8regs : 9288 MB/sec
Dec 13 14:15:08.923424 kernel: 32regs : 11117 MB/sec
Dec 13 14:15:08.925213 kernel: arm64_neon : 9290 MB/sec
Dec 13 14:15:08.925243 kernel: xor: using function: 32regs (11117 MB/sec)
Dec 13 14:15:09.016398 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:15:09.034789 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:15:09.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:09.037000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:15:09.047512 kernel: audit: type=1130 audit(1734099309.036:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:09.047597 kernel: audit: type=1334 audit(1734099309.037:9): prog-id=7 op=LOAD
Dec 13 14:15:09.045849 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:15:09.052447 kernel: audit: type=1334 audit(1734099309.043:10): prog-id=8 op=LOAD
Dec 13 14:15:09.043000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:15:09.075795 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Dec 13 14:15:09.086444 systemd[1]: Started systemd-udevd.service.
Dec 13 14:15:09.093625 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:15:09.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:09.119622 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Dec 13 14:15:09.181243 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:15:09.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:09.185610 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:15:09.283278 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:15:09.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:09.412274 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:15:09.412336 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 14:15:09.437984 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:15:09.438215 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:15:09.438441 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 14:15:09.438468 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:15:09.438732 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:5f:2b:45:7a:9b
Dec 13 14:15:09.438946 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:15:09.447770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:15:09.447911 kernel: GPT:9289727 != 16777215
Dec 13 14:15:09.447938 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:15:09.449744 kernel: GPT:9289727 != 16777215
Dec 13 14:15:09.450914 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:15:09.454007 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:15:09.460863 (udev-worker)[557]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:15:09.536398 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (565)
Dec 13 14:15:09.554528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:15:09.628413 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:15:09.664766 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:15:09.667134 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:15:09.680810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:15:09.684136 systemd[1]: Starting disk-uuid.service...
Dec 13 14:15:09.696262 disk-uuid[668]: Primary Header is updated.
Dec 13 14:15:09.696262 disk-uuid[668]: Secondary Entries is updated.
Dec 13 14:15:09.696262 disk-uuid[668]: Secondary Header is updated.
Dec 13 14:15:09.705414 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:15:09.713403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:15:09.721394 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:15:10.720415 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:15:10.721715 disk-uuid[669]: The operation has completed successfully.
Dec 13 14:15:10.906843 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:15:10.908928 systemd[1]: Finished disk-uuid.service.
Dec 13 14:15:10.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:10.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:10.913052 systemd[1]: Starting verity-setup.service...
Dec 13 14:15:10.949396 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:15:11.040617 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:15:11.044571 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:15:11.053583 systemd[1]: Finished verity-setup.service.
Dec 13 14:15:11.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.137403 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:15:11.138700 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:15:11.139847 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:15:11.141412 systemd[1]: Starting ignition-setup.service...
Dec 13 14:15:11.144557 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:15:11.195146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:15:11.195217 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:15:11.197449 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:15:11.207708 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:15:11.225761 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:15:11.238973 systemd[1]: Finished ignition-setup.service.
Dec 13 14:15:11.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.243193 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:15:11.300815 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:15:11.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.303000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:15:11.305875 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:15:11.352706 systemd-networkd[1181]: lo: Link UP
Dec 13 14:15:11.352729 systemd-networkd[1181]: lo: Gained carrier
Dec 13 14:15:11.356497 systemd-networkd[1181]: Enumeration completed
Dec 13 14:15:11.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.356973 systemd-networkd[1181]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:15:11.357302 systemd[1]: Started systemd-networkd.service.
Dec 13 14:15:11.359233 systemd[1]: Reached target network.target.
Dec 13 14:15:11.363232 systemd[1]: Starting iscsiuio.service...
Dec 13 14:15:11.375052 systemd-networkd[1181]: eth0: Link UP
Dec 13 14:15:11.376571 systemd-networkd[1181]: eth0: Gained carrier
Dec 13 14:15:11.379633 systemd[1]: Started iscsiuio.service.
Dec 13 14:15:11.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.383763 systemd[1]: Starting iscsid.service...
Dec 13 14:15:11.392328 iscsid[1186]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:15:11.392328 iscsid[1186]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:15:11.392328 iscsid[1186]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:15:11.392328 iscsid[1186]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:15:11.392328 iscsid[1186]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:15:11.392328 iscsid[1186]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:15:11.392328 iscsid[1186]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:15:11.416318 systemd[1]: Started iscsid.service.
Dec 13 14:15:11.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.419897 systemd-networkd[1181]: eth0: DHCPv4 address 172.31.20.24/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:15:11.422687 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:15:11.448592 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:15:11.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.449608 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:15:11.449713 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:15:11.450007 systemd[1]: Reached target remote-fs.target.
Dec 13 14:15:11.464577 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:15:11.490558 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:15:11.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.952143 ignition[1133]: Ignition 2.14.0
Dec 13 14:15:11.952689 ignition[1133]: Stage: fetch-offline
Dec 13 14:15:11.953032 ignition[1133]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:11.953145 ignition[1133]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:11.971832 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:11.972980 ignition[1133]: Ignition finished successfully
Dec 13 14:15:11.977753 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:15:11.990196 kernel: kauditd_printk_skb: 14 callbacks suppressed
Dec 13 14:15:11.991484 kernel: audit: type=1130 audit(1734099311.978:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:11.982002 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:15:11.997208 ignition[1205]: Ignition 2.14.0
Dec 13 14:15:11.997238 ignition[1205]: Stage: fetch
Dec 13 14:15:11.997789 ignition[1205]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:11.998254 ignition[1205]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:12.017487 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:12.019643 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.032250 ignition[1205]: INFO : PUT result: OK
Dec 13 14:15:12.036056 ignition[1205]: DEBUG : parsed url from cmdline: ""
Dec 13 14:15:12.037786 ignition[1205]: INFO : no config URL provided
Dec 13 14:15:12.039375 ignition[1205]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:15:12.041536 ignition[1205]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:15:12.043483 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.046538 ignition[1205]: INFO : PUT result: OK
Dec 13 14:15:12.048066 ignition[1205]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:15:12.050907 ignition[1205]: INFO : GET result: OK
Dec 13 14:15:12.052412 ignition[1205]: DEBUG : parsing config with SHA512: 8dfb8bdb8f9b204f0bd02d61920b3ea83dc3b4a90bf3b7fe8382a1eadec6d6cfbf33599bcada809ef677a1ab6a286537cc77dccf0d022a2c4d67982fc45dd5c0
Dec 13 14:15:12.057648 unknown[1205]: fetched base config from "system"
Dec 13 14:15:12.058645 ignition[1205]: fetch: fetch complete
Dec 13 14:15:12.057665 unknown[1205]: fetched base config from "system"
Dec 13 14:15:12.058659 ignition[1205]: fetch: fetch passed
Dec 13 14:15:12.057680 unknown[1205]: fetched user config from "aws"
Dec 13 14:15:12.058745 ignition[1205]: Ignition finished successfully
Dec 13 14:15:12.072965 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:15:12.084553 kernel: audit: type=1130 audit(1734099312.073:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.076918 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:15:12.099600 ignition[1211]: Ignition 2.14.0
Dec 13 14:15:12.101215 ignition[1211]: Stage: kargs
Dec 13 14:15:12.102677 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:12.104845 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:12.115693 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:12.118024 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.121302 ignition[1211]: INFO : PUT result: OK
Dec 13 14:15:12.126563 ignition[1211]: kargs: kargs passed
Dec 13 14:15:12.126851 ignition[1211]: Ignition finished successfully
Dec 13 14:15:12.131198 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:15:12.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.143556 kernel: audit: type=1130 audit(1734099312.132:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.142020 systemd[1]: Starting ignition-disks.service...
Dec 13 14:15:12.150750 ignition[1217]: Ignition 2.14.0
Dec 13 14:15:12.152340 ignition[1217]: Stage: disks
Dec 13 14:15:12.153883 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:12.154091 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:12.165884 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:12.168227 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.172025 ignition[1217]: INFO : PUT result: OK
Dec 13 14:15:12.177555 ignition[1217]: disks: disks passed
Dec 13 14:15:12.177670 ignition[1217]: Ignition finished successfully
Dec 13 14:15:12.181815 systemd[1]: Finished ignition-disks.service.
Dec 13 14:15:12.184892 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:15:12.199035 kernel: audit: type=1130 audit(1734099312.183:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.199026 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:15:12.200652 systemd[1]: Reached target local-fs.target.
Dec 13 14:15:12.202187 systemd[1]: Reached target sysinit.target.
Dec 13 14:15:12.203712 systemd[1]: Reached target basic.target.
Dec 13 14:15:12.219976 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:15:12.266672 systemd-fsck[1225]: ROOT: clean, 621/553520 files, 56020/553472 blocks
Dec 13 14:15:12.273486 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:15:12.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.279556 systemd[1]: Mounting sysroot.mount...
Dec 13 14:15:12.286835 kernel: audit: type=1130 audit(1734099312.275:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.308386 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:15:12.309374 systemd[1]: Mounted sysroot.mount.
Dec 13 14:15:12.312757 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:15:12.324512 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:15:12.327905 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:15:12.328108 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:15:12.328236 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:15:12.346723 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:15:12.369810 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:15:12.374426 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:15:12.393806 initrd-setup-root[1247]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:15:12.402411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1242)
Dec 13 14:15:12.408090 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:15:12.408147 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:15:12.410222 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:15:12.413073 initrd-setup-root[1262]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:15:12.421567 initrd-setup-root[1279]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:15:12.427414 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:15:12.432818 initrd-setup-root[1289]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:15:12.439948 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:15:12.626656 systemd-networkd[1181]: eth0: Gained IPv6LL
Dec 13 14:15:12.635969 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:15:12.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.644980 systemd[1]: Starting ignition-mount.service...
Dec 13 14:15:12.650102 kernel: audit: type=1130 audit(1734099312.637:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.649190 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:15:12.665313 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:15:12.665549 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:15:12.690779 ignition[1307]: INFO : Ignition 2.14.0
Dec 13 14:15:12.690779 ignition[1307]: INFO : Stage: mount
Dec 13 14:15:12.694598 ignition[1307]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:12.694598 ignition[1307]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:12.714117 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:12.714117 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.729531 ignition[1307]: INFO : PUT result: OK
Dec 13 14:15:12.735642 ignition[1307]: INFO : mount: mount passed
Dec 13 14:15:12.737555 ignition[1307]: INFO : Ignition finished successfully
Dec 13 14:15:12.741281 systemd[1]: Finished ignition-mount.service.
Dec 13 14:15:12.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.752505 systemd[1]: Starting ignition-files.service...
Dec 13 14:15:12.760420 kernel: audit: type=1130 audit(1734099312.741:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.765897 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:15:12.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.777466 kernel: audit: type=1130 audit(1734099312.768:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:12.779573 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:15:12.805449 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1318)
Dec 13 14:15:12.812413 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:15:12.812600 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:15:12.814700 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:15:12.829385 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:15:12.835209 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:15:12.854648 ignition[1337]: INFO : Ignition 2.14.0
Dec 13 14:15:12.854648 ignition[1337]: INFO : Stage: files
Dec 13 14:15:12.857914 ignition[1337]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:15:12.857914 ignition[1337]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:15:12.876828 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:15:12.879272 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:15:12.882711 ignition[1337]: INFO : PUT result: OK
Dec 13 14:15:12.888834 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:15:12.892906 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:15:12.892906 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:15:12.924239 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:15:12.927714 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:15:12.931344 unknown[1337]: wrote ssh authorized keys file for user: core
Dec 13 14:15:12.933558 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:15:12.944163 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:15:12.947616 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:15:12.951019 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:15:12.954591 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:15:12.980391 ignition[1337]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem995591967"
Dec 13 14:15:12.986879 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1340)
Dec 13 14:15:12.986962 ignition[1337]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem995591967": device or resource busy
Dec 13 14:15:12.986962 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem995591967", trying btrfs: device or resource busy
Dec 13 14:15:12.986962 ignition[1337]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem995591967"
Dec 13 14:15:12.986962 ignition[1337]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem995591967"
Dec 13 14:15:13.011671 ignition[1337]: INFO : op(3): [started] unmounting "/mnt/oem995591967"
Dec 13 14:15:13.013994 ignition[1337]: INFO : op(3): [finished] unmounting "/mnt/oem995591967"
Dec 13 14:15:13.013994 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:15:13.013994 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:15:13.023239 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:15:13.051211 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:15:13.055936 ignition[1337]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3789589625"
Dec 13 14:15:13.066632 ignition[1337]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3789589625": device or resource busy
Dec 13 14:15:13.066632 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3789589625", trying btrfs: device or resource busy
Dec 13 14:15:13.066632 ignition[1337]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3789589625"
Dec 13 14:15:13.066632 ignition[1337]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3789589625"
Dec 13 14:15:13.066632 ignition[1337]: INFO : op(6): [started] unmounting "/mnt/oem3789589625"
Dec 13 14:15:13.066632 ignition[1337]: INFO : op(6): [finished] unmounting "/mnt/oem3789589625"
Dec 13 14:15:13.066632 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:15:13.066632 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:15:13.066632 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:15:13.102889 ignition[1337]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1322030077"
Dec 13 14:15:13.102889 ignition[1337]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1322030077": device or resource busy
Dec 13 14:15:13.102889 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1322030077", trying btrfs: device or resource busy
Dec 13 14:15:13.102889 ignition[1337]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1322030077"
Dec 13 14:15:13.102889 ignition[1337]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1322030077"
Dec 13 14:15:13.102889 ignition[1337]: INFO : op(9): [started] unmounting "/mnt/oem1322030077"
Dec 13 14:15:13.102889 ignition[1337]: INFO : op(9): [finished] unmounting "/mnt/oem1322030077"
Dec 13 14:15:13.102889 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:15:13.102889 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:15:13.102889 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:15:13.134168 ignition[1337]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2072366336"
Dec 13 14:15:13.136936 ignition[1337]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2072366336": device or resource busy
Dec 13 14:15:13.136936 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2072366336", trying btrfs: device or resource busy
Dec 13 14:15:13.136936 ignition[1337]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2072366336"
Dec 13 14:15:13.146556 ignition[1337]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2072366336"
Dec 13 14:15:13.146556 ignition[1337]: INFO : op(c): [started] unmounting "/mnt/oem2072366336"
Dec 13 14:15:13.151472 ignition[1337]: INFO : op(c): [finished] unmounting "/mnt/oem2072366336"
Dec 13 14:15:13.153589 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:15:13.157141 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:15:13.161110 ignition[1337]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 14:15:13.600585 ignition[1337]: INFO : GET result: OK
Dec 13 14:15:14.454852 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:15:14.454852 ignition[1337]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(e): [started] processing unit "nvidia.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(e): [finished] processing unit "nvidia.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(10): [started] processing unit "containerd.service"
Dec 13 14:15:14.461581 ignition[1337]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(10): [finished] processing unit "containerd.service"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(13): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(13): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(14): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: op(14): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:15:14.490265 ignition[1337]: INFO : files: createResultFile: createFiles: op(15): [started] writing file
"/sysroot/etc/.ignition-result.json" Dec 13 14:15:14.490265 ignition[1337]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:15:14.490265 ignition[1337]: INFO : files: files passed Dec 13 14:15:14.490265 ignition[1337]: INFO : Ignition finished successfully Dec 13 14:15:14.523880 systemd[1]: Finished ignition-files.service. Dec 13 14:15:14.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.535399 kernel: audit: type=1130 audit(1734099314.524:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.544850 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:15:14.546763 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:15:14.556279 systemd[1]: Starting ignition-quench.service... Dec 13 14:15:14.565006 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:15:14.565451 systemd[1]: Finished ignition-quench.service. Dec 13 14:15:14.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.578436 kernel: audit: type=1130 audit(1734099314.567:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:15:14.581405 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:15:14.585867 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:15:14.588747 systemd[1]: Reached target ignition-complete.target. Dec 13 14:15:14.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.598023 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:15:14.627987 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:15:14.628407 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:15:14.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.633523 systemd[1]: Reached target initrd-fs.target. Dec 13 14:15:14.636395 systemd[1]: Reached target initrd.target. Dec 13 14:15:14.639194 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:15:14.643121 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:15:14.670920 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:15:14.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.675646 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:15:14.697698 systemd[1]: Stopped target nss-lookup.target. 
Dec 13 14:15:14.701444 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:15:14.705539 systemd[1]: Stopped target timers.target. Dec 13 14:15:14.709301 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:15:14.711646 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:15:14.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.715294 systemd[1]: Stopped target initrd.target. Dec 13 14:15:14.718329 systemd[1]: Stopped target basic.target. Dec 13 14:15:14.721450 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:15:14.725052 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:15:14.728852 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:15:14.732700 systemd[1]: Stopped target remote-fs.target. Dec 13 14:15:14.736103 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:15:14.739753 systemd[1]: Stopped target sysinit.target. Dec 13 14:15:14.743164 systemd[1]: Stopped target local-fs.target. Dec 13 14:15:14.746579 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:15:14.750000 systemd[1]: Stopped target swap.target. Dec 13 14:15:14.752936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:15:14.755160 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:15:14.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.758773 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:15:14.762020 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:15:14.764116 systemd[1]: Stopped dracut-initqueue.service. 
Dec 13 14:15:14.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.767389 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:15:14.769750 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:15:14.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.773564 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:15:14.773805 systemd[1]: Stopped ignition-files.service. Dec 13 14:15:14.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.780608 systemd[1]: Stopping ignition-mount.service... Dec 13 14:15:14.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.784144 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:15:14.802025 ignition[1375]: INFO : Ignition 2.14.0 Dec 13 14:15:14.802025 ignition[1375]: INFO : Stage: umount Dec 13 14:15:14.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.795931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 14:15:14.809998 ignition[1375]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:15:14.809998 ignition[1375]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:15:14.796270 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:15:14.799281 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:15:14.799838 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:15:14.823706 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:15:14.824186 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:15:14.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.845701 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:15:14.848542 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:15:14.850932 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:15:14.854247 ignition[1375]: INFO : PUT result: OK Dec 13 14:15:14.859714 ignition[1375]: INFO : umount: umount passed Dec 13 14:15:14.861571 ignition[1375]: INFO : Ignition finished successfully Dec 13 14:15:14.864897 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:15:14.865124 systemd[1]: Stopped ignition-mount.service. Dec 13 14:15:14.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:15:14.869995 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:15:14.870118 systemd[1]: Stopped ignition-disks.service. Dec 13 14:15:14.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.877235 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:15:14.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.877402 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:15:14.880739 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:15:14.880841 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:15:14.882686 systemd[1]: Stopped target network.target. Dec 13 14:15:14.885882 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:15:14.886013 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:15:14.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.894808 systemd[1]: Stopped target paths.target. Dec 13 14:15:14.894952 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:15:14.903157 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:15:14.908540 systemd[1]: Stopped target slices.target. Dec 13 14:15:14.911504 systemd[1]: Stopped target sockets.target. 
Dec 13 14:15:14.914733 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:15:14.914864 systemd[1]: Closed iscsid.socket. Dec 13 14:15:14.919129 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:15:14.919275 systemd[1]: Closed iscsiuio.socket. Dec 13 14:15:14.922418 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:15:14.922549 systemd[1]: Stopped ignition-setup.service. Dec 13 14:15:14.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.928984 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:15:14.934811 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:15:14.937267 systemd-networkd[1181]: eth0: DHCPv6 lease lost Dec 13 14:15:14.950871 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:15:14.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.951101 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:15:14.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.957040 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:15:14.957434 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:15:14.966000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:15:14.966000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:15:14.968026 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:15:14.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:15:14.968100 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:15:14.971136 systemd[1]: Stopping network-cleanup.service... Dec 13 14:15:14.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.972685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:15:14.972817 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:15:14.976976 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:15:14.977086 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:15:14.990646 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:15:14.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:14.990757 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:15:15.000275 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:15:15.004879 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:15:15.016973 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:15:15.017262 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:15:15.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.030098 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:15:15.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.030333 systemd[1]: Stopped network-cleanup.service. 
Dec 13 14:15:15.035680 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:15:15.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.035805 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:15:15.041076 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:15:15.041430 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:15:15.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.052206 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:15:15.052299 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:15:15.059105 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:15:15.059211 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:15:15.064263 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:15:15.064460 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:15:15.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.076594 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:15:15.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:15:15.076713 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:15:15.080087 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:15:15.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.080260 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:15:15.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:15.085078 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:15:15.094608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:15:15.094801 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:15:15.098812 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:15:15.098927 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:15:15.104733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:15:15.104958 systemd[1]: Stopped systemd-vconsole-setup.service. 
Dec 13 14:15:15.109186 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:15:15.114596 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:15:15.114836 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:15:15.120353 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:15:15.155000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:15:15.155000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:15:15.155000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:15:15.155000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:15:15.155000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:15:15.126454 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:15:15.156748 systemd[1]: Switching root. Dec 13 14:15:15.190501 iscsid[1186]: iscsid shutting down. Dec 13 14:15:15.195571 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Dec 13 14:15:15.195671 systemd-journald[310]: Journal stopped Dec 13 14:15:21.403015 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:15:21.403152 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:15:21.403196 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:15:21.403227 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:15:21.403257 kernel: SELinux: policy capability open_perms=1 Dec 13 14:15:21.403294 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:15:21.403327 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:15:21.403410 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:15:21.403448 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:15:21.403483 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:15:21.403513 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:15:21.403548 systemd[1]: Successfully loaded SELinux policy in 113.027ms. 
Dec 13 14:15:21.403614 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.378ms. Dec 13 14:15:21.403654 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:15:21.403689 systemd[1]: Detected virtualization amazon. Dec 13 14:15:21.403719 systemd[1]: Detected architecture arm64. Dec 13 14:15:21.403750 systemd[1]: Detected first boot. Dec 13 14:15:21.403783 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:15:21.403817 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:15:21.403850 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:15:21.403883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:15:21.403923 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:15:21.403960 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:15:21.403994 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:15:21.404027 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:15:21.404061 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:15:21.404093 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:15:21.404125 systemd[1]: Created slice system-getty.slice. 
Dec 13 14:15:21.404154 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:15:21.404189 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:15:21.404223 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:15:21.404255 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:15:21.404288 systemd[1]: Created slice user.slice. Dec 13 14:15:21.404320 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:15:21.404350 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:15:21.404406 systemd[1]: Set up automount boot.automount. Dec 13 14:15:21.404438 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:15:21.404473 systemd[1]: Reached target integritysetup.target. Dec 13 14:15:21.404510 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:15:21.404543 systemd[1]: Reached target remote-fs.target. Dec 13 14:15:21.404578 systemd[1]: Reached target slices.target. Dec 13 14:15:21.404610 systemd[1]: Reached target swap.target. Dec 13 14:15:21.404641 systemd[1]: Reached target torcx.target. Dec 13 14:15:21.404672 systemd[1]: Reached target veritysetup.target. Dec 13 14:15:21.404704 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:15:21.404733 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:15:21.404768 kernel: kauditd_printk_skb: 55 callbacks suppressed Dec 13 14:15:21.404797 kernel: audit: type=1400 audit(1734099320.955:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:15:21.404837 systemd[1]: Listening on systemd-journald-audit.socket. 
Dec 13 14:15:21.404867 kernel: audit: type=1335 audit(1734099320.955:84): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:15:21.410462 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:15:21.410542 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:15:21.410580 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:15:21.410617 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:15:21.410652 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:15:21.410684 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:15:21.410716 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:15:21.410748 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:15:21.410778 systemd[1]: Mounting media.mount... Dec 13 14:15:21.410809 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:15:21.410842 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:15:21.410873 systemd[1]: Mounting tmp.mount... Dec 13 14:15:21.410902 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:15:21.410936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:15:21.410967 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:15:21.410997 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:15:21.411032 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:15:21.411064 systemd[1]: Starting modprobe@drm.service... Dec 13 14:15:21.411094 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:15:21.411126 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:15:21.411159 systemd[1]: Starting modprobe@loop.service... Dec 13 14:15:21.411193 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 13 14:15:21.411229 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:15:21.411260 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:15:21.411289 kernel: fuse: init (API version 7.34) Dec 13 14:15:21.411321 systemd[1]: Starting systemd-journald.service... Dec 13 14:15:21.411352 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:15:21.411405 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:15:21.411439 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:15:21.411472 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:15:21.411503 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:15:21.411539 kernel: loop: module loaded Dec 13 14:15:21.411569 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:15:21.411598 systemd[1]: Mounted media.mount. Dec 13 14:15:21.411629 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:15:21.411663 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:15:21.411692 systemd[1]: Mounted tmp.mount. Dec 13 14:15:21.411723 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:15:21.411753 kernel: audit: type=1130 audit(1734099321.253:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:21.411790 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:15:21.411821 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:15:21.411851 kernel: audit: type=1130 audit(1734099321.272:86): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:15:21.411880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 14:15:21.411910 kernel: audit: type=1131 audit(1734099321.272:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.411939 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:15:21.411968 kernel: audit: type=1130 audit(1734099321.294:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.411997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:15:21.412034 kernel: audit: type=1131 audit(1734099321.294:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.412063 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:15:21.412093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:15:21.412124 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:15:21.412155 kernel: audit: type=1130 audit(1734099321.317:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.412190 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:15:21.412220 kernel: audit: type=1131 audit(1734099321.317:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.412249 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:15:21.412279 kernel: audit: type=1130 audit(1734099321.342:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.412310 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:15:21.412342 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:15:21.413088 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:15:21.413134 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:15:21.413165 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:15:21.413195 systemd[1]: Reached target network-pre.target.
Dec 13 14:15:21.413225 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:15:21.413257 systemd-journald[1534]: Journal started
Dec 13 14:15:21.413372 systemd-journald[1534]: Runtime Journal (/run/log/journal/ec29d055126f4623ec4bc9ea633b864d) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:15:20.955000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:15:20.955000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:15:21.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.391000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:15:21.391000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc90c8af0 a2=4000 a3=1 items=0 ppid=1 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:15:21.391000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:15:21.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.430283 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:15:21.430437 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:15:21.450538 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:15:21.450625 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:15:21.450664 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:15:21.456735 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:15:21.462552 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:15:21.470721 systemd[1]: Started systemd-journald.service.
Dec 13 14:15:21.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.476051 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:15:21.477964 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:15:21.482560 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:15:21.500145 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:15:21.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.513958 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:15:21.521503 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:15:21.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.523526 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:15:21.531651 systemd-journald[1534]: Time spent on flushing to /var/log/journal/ec29d055126f4623ec4bc9ea633b864d is 65.634ms for 1057 entries.
Dec 13 14:15:21.531651 systemd-journald[1534]: System Journal (/var/log/journal/ec29d055126f4623ec4bc9ea633b864d) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:15:21.636519 systemd-journald[1534]: Received client request to flush runtime journal.
Dec 13 14:15:21.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.564314 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:15:21.638461 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:15:21.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.667535 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:15:21.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.671842 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:15:21.693624 udevadm[1583]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:15:21.765312 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:15:21.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:21.769797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:15:21.887863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:15:21.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:22.349800 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:15:22.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:22.354149 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:15:22.396695 systemd-udevd[1589]: Using default interface naming scheme 'v252'.
Dec 13 14:15:22.450131 systemd[1]: Started systemd-udevd.service.
Dec 13 14:15:22.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:22.455639 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:15:22.477922 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:15:22.558298 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:15:22.594628 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:15:22.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:22.601602 (udev-worker)[1590]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:15:22.831745 systemd-networkd[1593]: lo: Link UP
Dec 13 14:15:22.831773 systemd-networkd[1593]: lo: Gained carrier
Dec 13 14:15:22.832832 systemd-networkd[1593]: Enumeration completed
Dec 13 14:15:22.833065 systemd[1]: Started systemd-networkd.service.
Dec 13 14:15:22.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:22.837822 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:15:22.841032 systemd-networkd[1593]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:15:22.846395 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:15:22.847781 systemd-networkd[1593]: eth0: Link UP
Dec 13 14:15:22.848240 systemd-networkd[1593]: eth0: Gained carrier
Dec 13 14:15:22.874109 systemd-networkd[1593]: eth0: DHCPv4 address 172.31.20.24/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:15:22.921411 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1591)
Dec 13 14:15:23.062732 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Dec 13 14:15:23.063625 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:15:23.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.068067 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:15:23.132905 lvm[1709]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:15:23.173248 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:15:23.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.175713 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:15:23.180327 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:15:23.190064 lvm[1711]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:15:23.228218 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:15:23.230751 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:15:23.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.232577 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:15:23.232639 systemd[1]: Reached target local-fs.target.
Dec 13 14:15:23.234751 systemd[1]: Reached target machines.target.
Dec 13 14:15:23.239167 systemd[1]: Starting ldconfig.service...
Dec 13 14:15:23.242005 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:15:23.242147 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:15:23.244855 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:15:23.249107 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:15:23.254727 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:15:23.260257 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:15:23.281069 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1714 (bootctl)
Dec 13 14:15:23.283736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:15:23.303234 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:15:23.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.312106 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:15:23.323908 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:15:23.324526 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:15:23.356424 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:15:23.434450 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:15:23.449059 systemd-fsck[1726]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:15:23.449059 systemd-fsck[1726]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters
Dec 13 14:15:23.450585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:15:23.453014 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:15:23.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.460080 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:15:23.465412 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:15:23.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.468089 systemd[1]: Mounting boot.mount...
Dec 13 14:15:23.499104 (sd-sysext)[1729]: Using extensions 'kubernetes'.
Dec 13 14:15:23.515432 (sd-sysext)[1729]: Merged extensions into '/usr'.
Dec 13 14:15:23.538479 systemd[1]: Mounted boot.mount.
Dec 13 14:15:23.562388 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:15:23.565888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:15:23.569055 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:15:23.574443 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:15:23.578553 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:15:23.580342 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:15:23.580687 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:15:23.592162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:15:23.592570 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:15:23.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.603531 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:15:23.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.608273 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:15:23.611007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:15:23.611429 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:15:23.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.626215 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:15:23.627079 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:15:23.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.632740 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:15:23.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:23.639757 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:15:23.641559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:15:23.641720 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:15:23.644341 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:15:23.668015 systemd[1]: Reloading.
Dec 13 14:15:23.675946 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:15:23.678050 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:15:23.682697 systemd-tmpfiles[1760]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:15:23.801840 /usr/lib/systemd/system-generators/torcx-generator[1782]: time="2024-12-13T14:15:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:15:23.801902 /usr/lib/systemd/system-generators/torcx-generator[1782]: time="2024-12-13T14:15:23Z" level=info msg="torcx already run"
Dec 13 14:15:24.070114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:15:24.070548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:15:24.135304 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:15:24.210560 systemd-networkd[1593]: eth0: Gained IPv6LL
Dec 13 14:15:24.287604 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:15:24.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.292957 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:15:24.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.301226 systemd[1]: Starting audit-rules.service...
Dec 13 14:15:24.306146 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:15:24.311292 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:15:24.322891 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:15:24.329461 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:15:24.336384 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:15:24.348770 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:15:24.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.368682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.373692 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:15:24.379656 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:15:24.385646 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:15:24.388810 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.389180 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:15:24.389610 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:15:24.392202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:15:24.392677 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:15:24.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.400553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:15:24.400943 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:15:24.403778 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:15:24.411769 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.415227 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:15:24.423585 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:15:24.428185 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.428647 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:15:24.428894 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:15:24.438274 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.443650 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:15:24.446112 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.446517 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:15:24.446849 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:15:24.451329 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:15:24.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.470000 audit[1849]: SYSTEM_BOOT pid=1849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.474873 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:15:24.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.480218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:15:24.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.480668 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:15:24.495689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:15:24.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.496054 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:15:24.498813 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:15:24.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.510162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:15:24.510628 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:15:24.512981 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:15:24.519290 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:15:24.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.519762 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:15:24.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:15:24.531262 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:15:24.602435 ldconfig[1713]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:15:24.607000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:15:24.607000 audit[1882]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd215a1d0 a2=420 a3=0 items=0 ppid=1844 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:15:24.607000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:15:24.609100 augenrules[1882]: No rules
Dec 13 14:15:24.611319 systemd[1]: Finished audit-rules.service.
Dec 13 14:15:24.616951 systemd[1]: Finished ldconfig.service.
Dec 13 14:15:24.621402 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:15:24.646216 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:15:24.681117 systemd-resolved[1847]: Positive Trust Anchors:
Dec 13 14:15:24.681145 systemd-resolved[1847]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:15:24.681198 systemd-resolved[1847]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:15:24.701670 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:15:24.703485 systemd[1]: Reached target time-set.target.
Dec 13 14:15:24.731836 systemd-resolved[1847]: Defaulting to hostname 'linux'.
Dec 13 14:15:24.734859 systemd[1]: Started systemd-resolved.service.
Dec 13 14:15:24.736689 systemd[1]: Reached target network.target.
Dec 13 14:15:24.738298 systemd[1]: Reached target network-online.target.
Dec 13 14:15:24.740036 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:15:24.741683 systemd[1]: Reached target sysinit.target.
Dec 13 14:15:24.743353 systemd[1]: Started motdgen.path.
Dec 13 14:15:24.286197 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:15:24.354051 systemd-journald[1534]: Time jumped backwards, rotating.
Dec 13 14:15:24.288864 systemd-resolved[1847]: Clock change detected. Flushing caches.
Dec 13 14:15:24.290950 systemd-timesyncd[1848]: Contacted time server 45.63.54.13:123 (0.flatcar.pool.ntp.org).
Dec 13 14:15:24.291161 systemd-timesyncd[1848]: Initial clock synchronization to Fri 2024-12-13 14:15:24.286087 UTC.
Dec 13 14:15:24.292926 systemd[1]: Started logrotate.timer.
Dec 13 14:15:24.294663 systemd[1]: Started mdadm.timer.
Dec 13 14:15:24.296235 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:15:24.298050 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:15:24.298115 systemd[1]: Reached target paths.target. Dec 13 14:15:24.299651 systemd[1]: Reached target timers.target. Dec 13 14:15:24.308901 systemd[1]: Listening on dbus.socket. Dec 13 14:15:24.315308 systemd[1]: Starting docker.socket... Dec 13 14:15:24.320086 systemd[1]: Listening on sshd.socket. Dec 13 14:15:24.321970 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:15:24.324750 systemd[1]: Listening on docker.socket. Dec 13 14:15:24.327019 systemd[1]: Reached target sockets.target. Dec 13 14:15:24.328879 systemd[1]: Reached target basic.target. Dec 13 14:15:24.343953 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:15:24.344040 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:15:24.344099 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:15:24.346894 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:15:24.351795 systemd[1]: Starting containerd.service... Dec 13 14:15:24.356294 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:15:24.361049 systemd[1]: Starting dbus.service... Dec 13 14:15:24.365580 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:15:24.371547 systemd[1]: Starting extend-filesystems.service... Dec 13 14:15:24.505836 jq[1900]: false Dec 13 14:15:24.373256 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:15:24.378872 systemd[1]: Starting kubelet.service... Dec 13 14:15:24.392191 systemd[1]: Starting motdgen.service... 
Dec 13 14:15:24.398619 systemd[1]: Started nvidia.service. Dec 13 14:15:24.417234 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:15:24.425403 systemd[1]: Starting sshd-keygen.service... Dec 13 14:15:24.435532 systemd[1]: Starting systemd-logind.service... Dec 13 14:15:24.439294 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:15:24.439577 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:15:24.516286 jq[1911]: true Dec 13 14:15:24.453432 systemd[1]: Starting update-engine.service... Dec 13 14:15:24.464365 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:15:24.478748 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:15:24.479314 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:15:24.513336 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:15:24.513955 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:15:24.638039 jq[1925]: true Dec 13 14:15:24.704317 dbus-daemon[1898]: [system] SELinux support is enabled Dec 13 14:15:24.714980 dbus-daemon[1898]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1593 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:15:24.704687 systemd[1]: Started dbus.service. Dec 13 14:15:24.710728 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:15:24.710827 systemd[1]: Reached target system-config.target. 
Dec 13 14:15:24.713151 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:15:24.713219 systemd[1]: Reached target user-config.target. Dec 13 14:15:24.719088 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:15:24.719656 systemd[1]: Finished motdgen.service. Dec 13 14:15:24.726258 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:15:24.732804 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:15:24.751653 extend-filesystems[1901]: Found loop1 Dec 13 14:15:24.751653 extend-filesystems[1901]: Found nvme0n1 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p1 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p2 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p3 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found usr Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p4 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p6 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p7 Dec 13 14:15:24.755905 extend-filesystems[1901]: Found nvme0n1p9 Dec 13 14:15:24.755905 extend-filesystems[1901]: Checking size of /dev/nvme0n1p9 Dec 13 14:15:24.813565 update_engine[1910]: I1213 14:15:24.812295 1910 main.cc:92] Flatcar Update Engine starting Dec 13 14:15:24.817545 extend-filesystems[1901]: Resized partition /dev/nvme0n1p9 Dec 13 14:15:24.824780 update_engine[1910]: I1213 14:15:24.822456 1910 update_check_scheduler.cc:74] Next update check in 7m39s Dec 13 14:15:24.825907 systemd[1]: Started update-engine.service. Dec 13 14:15:24.833604 systemd[1]: Started locksmithd.service. Dec 13 14:15:24.851010 extend-filesystems[1965]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:15:24.865088 amazon-ssm-agent[1895]: 2024/12/13 14:15:24 Failed to load instance info from vault. RegistrationKey does not exist. 
Dec 13 14:15:24.880352 amazon-ssm-agent[1895]: Initializing new seelog logger Dec 13 14:15:24.880726 amazon-ssm-agent[1895]: New Seelog Logger Creation Complete Dec 13 14:15:24.880726 amazon-ssm-agent[1895]: 2024/12/13 14:15:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:15:24.880726 amazon-ssm-agent[1895]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:15:24.881501 amazon-ssm-agent[1895]: 2024/12/13 14:15:24 processing appconfig overrides Dec 13 14:15:24.896537 bash[1971]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:15:24.899874 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:15:25.051707 env[1916]: time="2024-12-13T14:15:25.050917292Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:15:25.060635 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:15:25.104644 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:15:25.127491 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:15:25.153741 extend-filesystems[1965]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:15:25.153741 extend-filesystems[1965]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:15:25.153741 extend-filesystems[1965]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:15:25.160643 extend-filesystems[1901]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:15:25.172493 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:15:25.173268 systemd[1]: Finished extend-filesystems.service. Dec 13 14:15:25.311147 systemd-logind[1909]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:15:25.313960 systemd-logind[1909]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 14:15:25.317447 systemd-logind[1909]: New seat seat0. 
Dec 13 14:15:25.329564 systemd[1]: Started systemd-logind.service. Dec 13 14:15:25.351333 env[1916]: time="2024-12-13T14:15:25.351264321Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:15:25.352247 env[1916]: time="2024-12-13T14:15:25.352194693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:15:25.363179 env[1916]: time="2024-12-13T14:15:25.363041950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:15:25.363559 env[1916]: time="2024-12-13T14:15:25.363515842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:15:25.366228 env[1916]: time="2024-12-13T14:15:25.366166630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:15:25.367583 env[1916]: time="2024-12-13T14:15:25.367526446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:15:25.367849 env[1916]: time="2024-12-13T14:15:25.367766038Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:15:25.368049 env[1916]: time="2024-12-13T14:15:25.367987582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:15:25.368390 env[1916]: time="2024-12-13T14:15:25.368346370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:15:25.369085 env[1916]: time="2024-12-13T14:15:25.369034690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:15:25.373876 env[1916]: time="2024-12-13T14:15:25.373811914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:15:25.374104 env[1916]: time="2024-12-13T14:15:25.374069710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:15:25.376049 env[1916]: time="2024-12-13T14:15:25.375992002Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:15:25.376652 env[1916]: time="2024-12-13T14:15:25.376603942Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:15:25.387911 env[1916]: time="2024-12-13T14:15:25.387846586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:15:25.388195 env[1916]: time="2024-12-13T14:15:25.388158046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:15:25.388332 env[1916]: time="2024-12-13T14:15:25.388301734Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:15:25.388600 env[1916]: time="2024-12-13T14:15:25.388561966Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.388793 env[1916]: time="2024-12-13T14:15:25.388760806Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 14:15:25.388985 env[1916]: time="2024-12-13T14:15:25.388928362Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.389186 env[1916]: time="2024-12-13T14:15:25.389117662Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.389981 env[1916]: time="2024-12-13T14:15:25.389916430Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.390259 env[1916]: time="2024-12-13T14:15:25.390225478Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.390430 env[1916]: time="2024-12-13T14:15:25.390396514Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.390613 env[1916]: time="2024-12-13T14:15:25.390578674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.390757 env[1916]: time="2024-12-13T14:15:25.390727702Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:15:25.391120 env[1916]: time="2024-12-13T14:15:25.391082686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:15:25.391420 env[1916]: time="2024-12-13T14:15:25.391384666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:15:25.392158 env[1916]: time="2024-12-13T14:15:25.392109502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:15:25.392355 env[1916]: time="2024-12-13T14:15:25.392321350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 14:15:25.393185 env[1916]: time="2024-12-13T14:15:25.393129118Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:15:25.395848 env[1916]: time="2024-12-13T14:15:25.395767546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.396196 env[1916]: time="2024-12-13T14:15:25.396151318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.396443 env[1916]: time="2024-12-13T14:15:25.396402862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.397818 env[1916]: time="2024-12-13T14:15:25.397766926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.398025 env[1916]: time="2024-12-13T14:15:25.397990618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.398161 env[1916]: time="2024-12-13T14:15:25.398130718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.398284 env[1916]: time="2024-12-13T14:15:25.398254474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.398413 env[1916]: time="2024-12-13T14:15:25.398382742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.398563 env[1916]: time="2024-12-13T14:15:25.398531830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:15:25.398976 env[1916]: time="2024-12-13T14:15:25.398938342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 14:15:25.401161 env[1916]: time="2024-12-13T14:15:25.401102194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.401821 env[1916]: time="2024-12-13T14:15:25.401773282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:15:25.402545 env[1916]: time="2024-12-13T14:15:25.402495118Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:15:25.402803 env[1916]: time="2024-12-13T14:15:25.402765022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:15:25.402925 env[1916]: time="2024-12-13T14:15:25.402895450Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:15:25.403313 env[1916]: time="2024-12-13T14:15:25.403272562Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:15:25.403506 env[1916]: time="2024-12-13T14:15:25.403473898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:15:25.410223 env[1916]: time="2024-12-13T14:15:25.410082898Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:15:25.411797 env[1916]: time="2024-12-13T14:15:25.410686402Z" level=info msg="Connect containerd service" Dec 13 14:15:25.411797 env[1916]: time="2024-12-13T14:15:25.410785138Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:15:25.412237 env[1916]: time="2024-12-13T14:15:25.412186078Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:15:25.414237 env[1916]: time="2024-12-13T14:15:25.414183214Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:15:25.414540 env[1916]: time="2024-12-13T14:15:25.414492046Z" level=info msg="Start subscribing containerd event" Dec 13 14:15:25.417809 env[1916]: time="2024-12-13T14:15:25.417760102Z" level=info msg="Start recovering state" Dec 13 14:15:25.422031 env[1916]: time="2024-12-13T14:15:25.421981810Z" level=info msg="Start event monitor" Dec 13 14:15:25.422263 env[1916]: time="2024-12-13T14:15:25.422229586Z" level=info msg="Start snapshots syncer" Dec 13 14:15:25.422384 env[1916]: time="2024-12-13T14:15:25.422355574Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:15:25.422503 env[1916]: time="2024-12-13T14:15:25.422474494Z" level=info msg="Start streaming server" Dec 13 14:15:25.425927 env[1916]: time="2024-12-13T14:15:25.421845742Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:15:25.426248 env[1916]: time="2024-12-13T14:15:25.426215158Z" level=info msg="containerd successfully booted in 0.457706s" Dec 13 14:15:25.426416 systemd[1]: Started containerd.service. 
Dec 13 14:15:25.544351 coreos-metadata[1897]: Dec 13 14:15:25.544 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:15:25.545952 coreos-metadata[1897]: Dec 13 14:15:25.545 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:15:25.546845 coreos-metadata[1897]: Dec 13 14:15:25.546 INFO Fetch successful Dec 13 14:15:25.546845 coreos-metadata[1897]: Dec 13 14:15:25.546 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:15:25.548071 coreos-metadata[1897]: Dec 13 14:15:25.547 INFO Fetch successful Dec 13 14:15:25.554075 unknown[1897]: wrote ssh authorized keys file for user: core Dec 13 14:15:25.571010 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:15:25.572198 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:15:25.575841 dbus-daemon[1898]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1945 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:15:25.581568 systemd[1]: Starting polkit.service... Dec 13 14:15:25.611032 update-ssh-keys[2039]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:15:25.612592 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:15:25.635804 polkitd[2040]: Started polkitd version 121 Dec 13 14:15:25.659543 polkitd[2040]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:15:25.659904 polkitd[2040]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:15:25.662087 polkitd[2040]: Finished loading, compiling and executing 2 rules Dec 13 14:15:25.663042 dbus-daemon[1898]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:15:25.663317 systemd[1]: Started polkit.service. 
Dec 13 14:15:25.664640 polkitd[2040]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:15:25.693673 systemd-hostnamed[1945]: Hostname set to (transient) Dec 13 14:15:25.693876 systemd-resolved[1847]: System hostname changed to 'ip-172-31-20-24'. Dec 13 14:15:25.837744 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Create new startup processor Dec 13 14:15:25.843212 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:15:25.847684 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing bookkeeping folders Dec 13 14:15:25.849939 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO removing the completed state files Dec 13 14:15:25.850128 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:15:25.851789 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:15:25.852008 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing healthcheck folders for long running plugins Dec 13 14:15:25.852128 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing locations for inventory plugin Dec 13 14:15:25.852242 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing default location for custom inventory Dec 13 14:15:25.852352 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing default location for file inventory Dec 13 14:15:25.852475 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Initializing default location for role inventory Dec 13 14:15:25.852588 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Init the cloudwatchlogs publisher Dec 13 14:15:25.852720 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:15:25.852854 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully 
loaded platform independent plugin aws:refreshAssociation Dec 13 14:15:25.852967 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:15:25.853078 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:15:25.853189 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:15:25.853300 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:15:25.853410 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:15:25.853520 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:15:25.853650 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:15:25.853786 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:15:25.854775 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:15:25.854912 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO OS: linux, Arch: arm64 Dec 13 14:15:25.873412 amazon-ssm-agent[1895]: datastore file /var/lib/amazon/ssm/i-086c119cc47aee441/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:15:25.942494 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] Starting session document processing engine... 
Dec 13 14:15:26.037498 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:15:26.132041 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:15:26.228737 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-086c119cc47aee441, requestId: 10c406b5-c3d9-4f04-943b-60c0832c8714 Dec 13 14:15:26.323337 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:15:26.360861 locksmithd[1966]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:15:26.418370 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:15:26.513793 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:15:26.608938 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:15:26.704530 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:15:26.742572 systemd[1]: Started kubelet.service. 
Dec 13 14:15:26.800313 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [instanceID=i-086c119cc47aee441] Starting association polling Dec 13 14:15:26.896151 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:15:26.992244 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:15:27.088633 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:15:27.185128 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:15:27.281851 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:15:27.378835 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] listening reply. Dec 13 14:15:27.475893 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:15:27.573224 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [OfflineService] Starting document processing engine... 
Dec 13 14:15:27.670737 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:15:27.768452 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:15:27.856655 kubelet[2125]: E1213 14:15:27.856534 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:15:27.860322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:15:27.860739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:15:27.866344 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [OfflineService] Starting message polling
Dec 13 14:15:27.964539 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:15:28.062813 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:15:28.161316 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:15:28.260056 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:15:28.358980 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:15:28.458066 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:15:28.557402 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:15:28.656834 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:15:28.756510 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-086c119cc47aee441?role=subscribe&stream=input
Dec 13 14:15:28.856472 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-086c119cc47aee441?role=subscribe&stream=input
Dec 13 14:15:28.956476 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:15:29.056770 amazon-ssm-agent[1895]: 2024-12-13 14:15:25 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:15:29.716007 sshd_keygen[1939]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:15:29.760279 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:15:29.765537 systemd[1]: Starting issuegen.service...
Dec 13 14:15:29.781151 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:15:29.781836 systemd[1]: Finished issuegen.service.
Dec 13 14:15:29.787659 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:15:29.804881 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:15:29.812088 systemd[1]: Started getty@tty1.service.
Dec 13 14:15:29.817507 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:15:29.819967 systemd[1]: Reached target getty.target.
Dec 13 14:15:29.821909 systemd[1]: Reached target multi-user.target.
Dec 13 14:15:29.829649 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:15:29.849529 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:15:29.850446 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:15:29.854745 systemd[1]: Startup finished in 9.767s (kernel) + 14.094s (userspace) = 23.861s.
Dec 13 14:15:33.489144 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:15:33.491849 systemd[1]: Started sshd@0-172.31.20.24:22-139.178.89.65:50404.service.
Dec 13 14:15:33.740606 sshd[2152]: Accepted publickey for core from 139.178.89.65 port 50404 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:33.745620 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:33.769443 systemd[1]: Created slice user-500.slice.
Dec 13 14:15:33.772583 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:15:33.784319 systemd-logind[1909]: New session 1 of user core.
Dec 13 14:15:33.806532 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:15:33.812651 systemd[1]: Starting user@500.service...
Dec 13 14:15:33.823588 (systemd)[2157]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:34.036215 systemd[2157]: Queued start job for default target default.target.
Dec 13 14:15:34.038643 systemd[2157]: Reached target paths.target.
Dec 13 14:15:34.038729 systemd[2157]: Reached target sockets.target.
Dec 13 14:15:34.038766 systemd[2157]: Reached target timers.target.
Dec 13 14:15:34.038797 systemd[2157]: Reached target basic.target.
Dec 13 14:15:34.039013 systemd[1]: Started user@500.service.
Dec 13 14:15:34.039998 systemd[2157]: Reached target default.target.
Dec 13 14:15:34.040432 systemd[2157]: Startup finished in 202ms.
Dec 13 14:15:34.042724 systemd[1]: Started session-1.scope.
Dec 13 14:15:34.193825 systemd[1]: Started sshd@1-172.31.20.24:22-139.178.89.65:50414.service.
Dec 13 14:15:34.368152 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 50414 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:34.370618 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:34.379916 systemd[1]: Started session-2.scope.
Dec 13 14:15:34.380851 systemd-logind[1909]: New session 2 of user core.
Dec 13 14:15:34.512807 sshd[2166]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:34.517426 systemd[1]: sshd@1-172.31.20.24:22-139.178.89.65:50414.service: Deactivated successfully.
Dec 13 14:15:34.519269 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:15:34.519454 systemd-logind[1909]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:15:34.522225 systemd-logind[1909]: Removed session 2.
Dec 13 14:15:34.537979 systemd[1]: Started sshd@2-172.31.20.24:22-139.178.89.65:50426.service.
Dec 13 14:15:34.705634 sshd[2173]: Accepted publickey for core from 139.178.89.65 port 50426 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:34.708720 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:34.717047 systemd[1]: Started session-3.scope.
Dec 13 14:15:34.717601 systemd-logind[1909]: New session 3 of user core.
Dec 13 14:15:34.840061 sshd[2173]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:34.846064 systemd[1]: sshd@2-172.31.20.24:22-139.178.89.65:50426.service: Deactivated successfully.
Dec 13 14:15:34.849031 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:15:34.851074 systemd-logind[1909]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:15:34.854503 systemd-logind[1909]: Removed session 3.
Dec 13 14:15:34.866050 systemd[1]: Started sshd@3-172.31.20.24:22-139.178.89.65:50442.service.
Dec 13 14:15:35.037495 sshd[2180]: Accepted publickey for core from 139.178.89.65 port 50442 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:35.040246 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:35.048179 systemd-logind[1909]: New session 4 of user core.
Dec 13 14:15:35.049088 systemd[1]: Started session-4.scope.
Dec 13 14:15:35.184880 sshd[2180]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:35.190666 systemd[1]: sshd@3-172.31.20.24:22-139.178.89.65:50442.service: Deactivated successfully.
Dec 13 14:15:35.192215 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:15:35.194977 systemd-logind[1909]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:15:35.197286 systemd-logind[1909]: Removed session 4.
Dec 13 14:15:35.209812 systemd[1]: Started sshd@4-172.31.20.24:22-139.178.89.65:50452.service.
Dec 13 14:15:35.380797 sshd[2187]: Accepted publickey for core from 139.178.89.65 port 50452 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:35.383377 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:35.391872 systemd-logind[1909]: New session 5 of user core.
Dec 13 14:15:35.392207 systemd[1]: Started session-5.scope.
Dec 13 14:15:35.539876 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:15:35.541160 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:15:35.568999 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:15:35.734343 coreos-metadata[2195]: Dec 13 14:15:35.733 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:15:35.735405 coreos-metadata[2195]: Dec 13 14:15:35.735 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Dec 13 14:15:35.736111 coreos-metadata[2195]: Dec 13 14:15:35.735 INFO Fetch successful
Dec 13 14:15:35.736747 coreos-metadata[2195]: Dec 13 14:15:35.736 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Dec 13 14:15:35.737125 coreos-metadata[2195]: Dec 13 14:15:35.736 INFO Fetch successful
Dec 13 14:15:35.737499 coreos-metadata[2195]: Dec 13 14:15:35.737 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Dec 13 14:15:35.737917 coreos-metadata[2195]: Dec 13 14:15:35.737 INFO Fetch successful
Dec 13 14:15:35.738274 coreos-metadata[2195]: Dec 13 14:15:35.738 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Dec 13 14:15:35.738707 coreos-metadata[2195]: Dec 13 14:15:35.738 INFO Fetch successful
Dec 13 14:15:35.739078 coreos-metadata[2195]: Dec 13 14:15:35.738 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Dec 13 14:15:35.739472 coreos-metadata[2195]: Dec 13 14:15:35.739 INFO Fetch successful
Dec 13 14:15:35.739815 coreos-metadata[2195]: Dec 13 14:15:35.739 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Dec 13 14:15:35.740201 coreos-metadata[2195]: Dec 13 14:15:35.739 INFO Fetch successful
Dec 13 14:15:35.740532 coreos-metadata[2195]: Dec 13 14:15:35.740 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Dec 13 14:15:35.740856 coreos-metadata[2195]: Dec 13 14:15:35.740 INFO Fetch successful
Dec 13 14:15:35.741107 coreos-metadata[2195]: Dec 13 14:15:35.740 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Dec 13 14:15:35.741355 coreos-metadata[2195]: Dec 13 14:15:35.741 INFO Fetch successful
Dec 13 14:15:35.756562 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:15:37.264309 systemd[1]: Stopped kubelet.service.
Dec 13 14:15:37.270104 systemd[1]: Starting kubelet.service...
Dec 13 14:15:37.311929 systemd[1]: Reloading.
Dec 13 14:15:37.488808 /usr/lib/systemd/system-generators/torcx-generator[2262]: time="2024-12-13T14:15:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:15:37.488864 /usr/lib/systemd/system-generators/torcx-generator[2262]: time="2024-12-13T14:15:37Z" level=info msg="torcx already run"
Dec 13 14:15:37.705175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:15:37.705216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:15:37.747932 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:15:37.959231 systemd[1]: Started kubelet.service.
Dec 13 14:15:37.967612 systemd[1]: Stopping kubelet.service...
Dec 13 14:15:37.970149 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:15:37.971977 systemd[1]: Stopped kubelet.service.
Dec 13 14:15:37.980302 systemd[1]: Starting kubelet.service...
Dec 13 14:15:38.262329 systemd[1]: Started kubelet.service.
Dec 13 14:15:38.375711 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:15:38.376329 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:15:38.376329 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:15:38.376506 kubelet[2337]: I1213 14:15:38.376433 2337 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:15:39.301123 amazon-ssm-agent[1895]: 2024-12-13 14:15:39 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:15:39.938117 kubelet[2337]: I1213 14:15:39.938049 2337 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:15:39.938117 kubelet[2337]: I1213 14:15:39.938108 2337 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:15:39.938862 kubelet[2337]: I1213 14:15:39.938478 2337 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:15:39.992079 kubelet[2337]: I1213 14:15:39.992000 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:15:40.016043 kubelet[2337]: I1213 14:15:40.015965 2337 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:15:40.017380 kubelet[2337]: I1213 14:15:40.017315 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:15:40.017958 kubelet[2337]: I1213 14:15:40.017891 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:15:40.018189 kubelet[2337]: I1213 14:15:40.017964 2337 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:15:40.018189 kubelet[2337]: I1213 14:15:40.017988 2337 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:15:40.018320 kubelet[2337]: I1213 14:15:40.018204 2337 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:15:40.036356 kubelet[2337]: I1213 14:15:40.036257 2337 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:15:40.036610 kubelet[2337]: I1213 14:15:40.036584 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:15:40.036788 kubelet[2337]: I1213 14:15:40.036764 2337 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:15:40.036910 kubelet[2337]: I1213 14:15:40.036888 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:15:40.038025 kubelet[2337]: E1213 14:15:40.037972 2337 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:15:40.038376 kubelet[2337]: E1213 14:15:40.038346 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:15:40.040876 kubelet[2337]: I1213 14:15:40.040829 2337 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:15:40.041486 kubelet[2337]: I1213 14:15:40.041441 2337 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:15:40.041580 kubelet[2337]: W1213 14:15:40.041548 2337 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:15:40.042861 kubelet[2337]: I1213 14:15:40.042742 2337 server.go:1256] "Started kubelet"
Dec 13 14:15:40.043891 kubelet[2337]: I1213 14:15:40.043854 2337 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:15:40.045685 kubelet[2337]: I1213 14:15:40.045625 2337 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:15:40.048618 kubelet[2337]: I1213 14:15:40.048556 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:15:40.049102 kubelet[2337]: I1213 14:15:40.049054 2337 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:15:40.056132 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:15:40.059461 kubelet[2337]: I1213 14:15:40.059319 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:15:40.061023 kubelet[2337]: E1213 14:15:40.060984 2337 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:15:40.062653 kubelet[2337]: I1213 14:15:40.062615 2337 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:15:40.063072 kubelet[2337]: I1213 14:15:40.063034 2337 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:15:40.063344 kubelet[2337]: I1213 14:15:40.063319 2337 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:15:40.064744 kubelet[2337]: I1213 14:15:40.064709 2337 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:15:40.068031 kubelet[2337]: I1213 14:15:40.067981 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:15:40.074463 kubelet[2337]: I1213 14:15:40.074424 2337 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:15:40.131451 kubelet[2337]: E1213 14:15:40.131074 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.24\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:15:40.131451 kubelet[2337]: W1213 14:15:40.131194 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:15:40.131451 kubelet[2337]: E1213 14:15:40.131233 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.20.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:15:40.131451 kubelet[2337]: W1213 14:15:40.131402 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:15:40.131451 kubelet[2337]: E1213 14:15:40.131439 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:15:40.138819 kubelet[2337]: W1213 14:15:40.138743 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:15:40.138819 kubelet[2337]: E1213 14:15:40.138810 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:15:40.139025 kubelet[2337]: I1213 14:15:40.138992 2337 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:15:40.139025 kubelet[2337]: I1213 14:15:40.139017 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:15:40.139167 kubelet[2337]: I1213 14:15:40.139049 2337 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:15:40.142030 kubelet[2337]: I1213 14:15:40.141974 2337 policy_none.go:49] "None policy: Start"
Dec 13 14:15:40.143179 kubelet[2337]: E1213 14:15:40.143134 2337 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.24.1810c229807bcd96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.24,UID:172.31.20.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.24,},FirstTimestamp:2024-12-13 14:15:40.042685846 +0000 UTC m=+1.766643969,LastTimestamp:2024-12-13 14:15:40.042685846 +0000 UTC m=+1.766643969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.24,}"
Dec 13 14:15:40.144454 kubelet[2337]: I1213 14:15:40.144414 2337 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:15:40.144719 kubelet[2337]: I1213 14:15:40.144659 2337 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:15:40.155372 kubelet[2337]: I1213 14:15:40.155329 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:15:40.155993 kubelet[2337]: I1213 14:15:40.155961 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:15:40.170935 kubelet[2337]: E1213 14:15:40.170900 2337 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.24.1810c2298192a0df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.24,UID:172.31.20.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.24,},FirstTimestamp:2024-12-13 14:15:40.060958943 +0000 UTC m=+1.784917054,LastTimestamp:2024-12-13 14:15:40.060958943 +0000 UTC m=+1.784917054,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.24,}"
Dec 13 14:15:40.172015 kubelet[2337]: I1213 14:15:40.171976 2337 kubelet_node_status.go:73] "Attempting to register node" node="172.31.20.24"
Dec 13 14:15:40.174548 kubelet[2337]: E1213 14:15:40.174510 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.24\" not found"
Dec 13 14:15:40.204834 kubelet[2337]: I1213 14:15:40.204668 2337 kubelet_node_status.go:76] "Successfully registered node" node="172.31.20.24"
Dec 13 14:15:40.210173 kubelet[2337]: I1213 14:15:40.209648 2337 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:15:40.210865 env[1916]: time="2024-12-13T14:15:40.210370739Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:15:40.211896 kubelet[2337]: I1213 14:15:40.211854 2337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:15:40.229142 kubelet[2337]: E1213 14:15:40.229041 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.266220 kubelet[2337]: I1213 14:15:40.266117 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:15:40.271578 kubelet[2337]: I1213 14:15:40.271540 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:15:40.271836 kubelet[2337]: I1213 14:15:40.271811 2337 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:15:40.271973 kubelet[2337]: I1213 14:15:40.271950 2337 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:15:40.272142 kubelet[2337]: E1213 14:15:40.272122 2337 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:15:40.329888 kubelet[2337]: E1213 14:15:40.329834 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.431056 kubelet[2337]: E1213 14:15:40.431002 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.532053 kubelet[2337]: E1213 14:15:40.531910 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.632671 kubelet[2337]: E1213 14:15:40.632603 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.733599 kubelet[2337]: E1213 14:15:40.733551 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.797997 sudo[2191]: pam_unix(sudo:session): session closed for user root
Dec 13 14:15:40.823416 sshd[2187]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:40.828185 systemd[1]: sshd@4-172.31.20.24:22-139.178.89.65:50452.service: Deactivated successfully.
Dec 13 14:15:40.830217 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:15:40.830260 systemd-logind[1909]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:15:40.833644 systemd-logind[1909]: Removed session 5.
Dec 13 14:15:40.835412 kubelet[2337]: E1213 14:15:40.835353 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.936281 kubelet[2337]: E1213 14:15:40.936235 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:40.941624 kubelet[2337]: I1213 14:15:40.941587 2337 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:15:40.942466 kubelet[2337]: W1213 14:15:40.942415 2337 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:15:41.036993 kubelet[2337]: E1213 14:15:41.036899 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:41.040067 kubelet[2337]: E1213 14:15:41.040012 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:15:41.137780 kubelet[2337]: E1213 14:15:41.137602 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:41.238852 kubelet[2337]: E1213 14:15:41.238789 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:41.339960 kubelet[2337]: E1213 14:15:41.339913 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.24\" not found"
Dec 13 14:15:42.040312 kubelet[2337]: I1213 14:15:42.040265 2337 apiserver.go:52] "Watching apiserver"
Dec 13 14:15:42.041222 kubelet[2337]: E1213 14:15:42.041180 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:15:42.048965 kubelet[2337]: I1213 14:15:42.048918 2337 topology_manager.go:215] "Topology Admit Handler" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" podNamespace="kube-system" podName="cilium-4tm5f"
Dec 13 14:15:42.049130 kubelet[2337]: I1213 14:15:42.049112 2337 topology_manager.go:215] "Topology Admit Handler" podUID="a6926eba-c558-466c-8e23-fd928ae64b53" podNamespace="kube-system" podName="kube-proxy-87xt7"
Dec 13 14:15:42.063793 kubelet[2337]: I1213 14:15:42.063754 2337 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:15:42.080144 kubelet[2337]: I1213 14:15:42.080104 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6926eba-c558-466c-8e23-fd928ae64b53-lib-modules\") pod \"kube-proxy-87xt7\" (UID: \"a6926eba-c558-466c-8e23-fd928ae64b53\") " pod="kube-system/kube-proxy-87xt7"
Dec 13 14:15:42.080393 kubelet[2337]: I1213 14:15:42.080369 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-cgroup\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.080546 kubelet[2337]: I1213 14:15:42.080525 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hubble-tls\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.080722 kubelet[2337]: I1213 14:15:42.080674 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-xtables-lock\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.080909 kubelet[2337]: I1213 14:15:42.080886 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-clustermesh-secrets\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.081067 kubelet[2337]: I1213 14:15:42.081046 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-kernel\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.081215 kubelet[2337]: I1213 14:15:42.081193 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hostproc\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.081370 kubelet[2337]: I1213 14:15:42.081349 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cni-path\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.081517 kubelet[2337]: I1213 14:15:42.081496 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a6926eba-c558-466c-8e23-fd928ae64b53-kube-proxy\") pod \"kube-proxy-87xt7\" (UID: \"a6926eba-c558-466c-8e23-fd928ae64b53\") " pod="kube-system/kube-proxy-87xt7"
Dec 13 14:15:42.081679 kubelet[2337]: I1213 14:15:42.081658 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6926eba-c558-466c-8e23-fd928ae64b53-xtables-lock\") pod \"kube-proxy-87xt7\" (UID: \"a6926eba-c558-466c-8e23-fd928ae64b53\") " pod="kube-system/kube-proxy-87xt7"
Dec 13 14:15:42.081846 kubelet[2337]: I1213 14:15:42.081825 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-config-path\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.081997 kubelet[2337]: I1213 14:15:42.081975 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-net\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.082159 kubelet[2337]: I1213 14:15:42.082138 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-etc-cni-netd\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.082321 kubelet[2337]: I1213 14:15:42.082295 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-lib-modules\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f"
Dec 13 14:15:42.082533 kubelet[2337]:
I1213 14:15:42.082508 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dtc\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-kube-api-access-m2dtc\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f" Dec 13 14:15:42.082712 kubelet[2337]: I1213 14:15:42.082675 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfj8\" (UniqueName: \"kubernetes.io/projected/a6926eba-c558-466c-8e23-fd928ae64b53-kube-api-access-gzfj8\") pod \"kube-proxy-87xt7\" (UID: \"a6926eba-c558-466c-8e23-fd928ae64b53\") " pod="kube-system/kube-proxy-87xt7" Dec 13 14:15:42.082862 kubelet[2337]: I1213 14:15:42.082841 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-run\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f" Dec 13 14:15:42.083051 kubelet[2337]: I1213 14:15:42.083027 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-bpf-maps\") pod \"cilium-4tm5f\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " pod="kube-system/cilium-4tm5f" Dec 13 14:15:42.357773 env[1916]: time="2024-12-13T14:15:42.357549854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87xt7,Uid:a6926eba-c558-466c-8e23-fd928ae64b53,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:42.360429 env[1916]: time="2024-12-13T14:15:42.360196490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tm5f,Uid:cfdf2a87-71c8-452e-a524-dd06bcc4f0f8,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:42.924596 env[1916]: time="2024-12-13T14:15:42.924506405Z" 
level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.926588 env[1916]: time="2024-12-13T14:15:42.926515109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.930514 env[1916]: time="2024-12-13T14:15:42.930448301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.934126 env[1916]: time="2024-12-13T14:15:42.934048361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.939331 env[1916]: time="2024-12-13T14:15:42.939250865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.945262 env[1916]: time="2024-12-13T14:15:42.945193901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.948246 env[1916]: time="2024-12-13T14:15:42.948180281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.953039 env[1916]: time="2024-12-13T14:15:42.952975481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:15:42.970120 env[1916]: time="2024-12-13T14:15:42.969989669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:42.970274 env[1916]: time="2024-12-13T14:15:42.970141121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:42.970274 env[1916]: time="2024-12-13T14:15:42.970205165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:42.971071 env[1916]: time="2024-12-13T14:15:42.970952729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd pid=2390 runtime=io.containerd.runc.v2 Dec 13 14:15:42.978851 env[1916]: time="2024-12-13T14:15:42.978463757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:42.978851 env[1916]: time="2024-12-13T14:15:42.978539897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:42.978851 env[1916]: time="2024-12-13T14:15:42.978565193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:42.979314 env[1916]: time="2024-12-13T14:15:42.979075277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/244811f4d6bab5d4c66a7b38cc038a294eae0da39fa706da015a48cdba10af17 pid=2405 runtime=io.containerd.runc.v2 Dec 13 14:15:43.042145 kubelet[2337]: E1213 14:15:43.042074 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:43.085555 env[1916]: time="2024-12-13T14:15:43.085468430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tm5f,Uid:cfdf2a87-71c8-452e-a524-dd06bcc4f0f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\"" Dec 13 14:15:43.092433 env[1916]: time="2024-12-13T14:15:43.092339486Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:15:43.097891 env[1916]: time="2024-12-13T14:15:43.097834370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87xt7,Uid:a6926eba-c558-466c-8e23-fd928ae64b53,Namespace:kube-system,Attempt:0,} returns sandbox id \"244811f4d6bab5d4c66a7b38cc038a294eae0da39fa706da015a48cdba10af17\"" Dec 13 14:15:43.203527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416611432.mount: Deactivated successfully. 
Dec 13 14:15:44.042599 kubelet[2337]: E1213 14:15:44.042517 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:45.043601 kubelet[2337]: E1213 14:15:45.043501 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:46.044521 kubelet[2337]: E1213 14:15:46.044404 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:47.045150 kubelet[2337]: E1213 14:15:47.045057 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:48.046150 kubelet[2337]: E1213 14:15:48.046076 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:49.047325 kubelet[2337]: E1213 14:15:49.047117 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:49.991866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581686084.mount: Deactivated successfully. 
Dec 13 14:15:50.048149 kubelet[2337]: E1213 14:15:50.048087 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:51.048842 kubelet[2337]: E1213 14:15:51.048755 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:52.049674 kubelet[2337]: E1213 14:15:52.049592 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:53.050351 kubelet[2337]: E1213 14:15:53.050297 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:54.011655 env[1916]: time="2024-12-13T14:15:54.011557224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.016142 env[1916]: time="2024-12-13T14:15:54.016077156Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.020357 env[1916]: time="2024-12-13T14:15:54.020291064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:54.023801 env[1916]: time="2024-12-13T14:15:54.022469280Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:15:54.025807 env[1916]: time="2024-12-13T14:15:54.025739652Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:15:54.028277 env[1916]: time="2024-12-13T14:15:54.028217652Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:15:54.053720 kubelet[2337]: E1213 14:15:54.051829 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:54.056288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688540497.mount: Deactivated successfully. Dec 13 14:15:54.067430 env[1916]: time="2024-12-13T14:15:54.067325148Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\"" Dec 13 14:15:54.068802 env[1916]: time="2024-12-13T14:15:54.068745672Z" level=info msg="StartContainer for \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\"" Dec 13 14:15:54.199790 env[1916]: time="2024-12-13T14:15:54.196246657Z" level=info msg="StartContainer for \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\" returns successfully" Dec 13 14:15:55.047556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739-rootfs.mount: Deactivated successfully. Dec 13 14:15:55.052610 kubelet[2337]: E1213 14:15:55.052541 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:55.727074 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 14:15:55.864731 env[1916]: time="2024-12-13T14:15:55.864641321Z" level=info msg="shim disconnected" id=0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739 Dec 13 14:15:55.865396 env[1916]: time="2024-12-13T14:15:55.864731657Z" level=warning msg="cleaning up after shim disconnected" id=0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739 namespace=k8s.io Dec 13 14:15:55.865396 env[1916]: time="2024-12-13T14:15:55.864755165Z" level=info msg="cleaning up dead shim" Dec 13 14:15:55.896569 env[1916]: time="2024-12-13T14:15:55.896478581Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2517 runtime=io.containerd.runc.v2\n" Dec 13 14:15:56.053918 kubelet[2337]: E1213 14:15:56.053705 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:56.330889 env[1916]: time="2024-12-13T14:15:56.330660459Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:15:56.380185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182438753.mount: Deactivated successfully. 
Dec 13 14:15:56.391274 env[1916]: time="2024-12-13T14:15:56.391207492Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\"" Dec 13 14:15:56.395994 env[1916]: time="2024-12-13T14:15:56.395937724Z" level=info msg="StartContainer for \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\"" Dec 13 14:15:56.518843 env[1916]: time="2024-12-13T14:15:56.518773888Z" level=info msg="StartContainer for \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\" returns successfully" Dec 13 14:15:56.540500 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:15:56.545309 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:15:56.546174 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:15:56.551981 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:15:56.591211 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:15:56.710824 env[1916]: time="2024-12-13T14:15:56.710725697Z" level=info msg="shim disconnected" id=eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83 Dec 13 14:15:56.711997 env[1916]: time="2024-12-13T14:15:56.711292373Z" level=warning msg="cleaning up after shim disconnected" id=eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83 namespace=k8s.io Dec 13 14:15:56.712266 env[1916]: time="2024-12-13T14:15:56.712198721Z" level=info msg="cleaning up dead shim" Dec 13 14:15:56.741247 env[1916]: time="2024-12-13T14:15:56.741167177Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n" Dec 13 14:15:57.054685 kubelet[2337]: E1213 14:15:57.054634 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:57.340028 env[1916]: time="2024-12-13T14:15:57.339811391Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:15:57.352292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83-rootfs.mount: Deactivated successfully. Dec 13 14:15:57.352605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022743339.mount: Deactivated successfully. Dec 13 14:15:57.405054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073279013.mount: Deactivated successfully. 
Dec 13 14:15:57.424914 env[1916]: time="2024-12-13T14:15:57.424843503Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\"" Dec 13 14:15:57.426028 env[1916]: time="2024-12-13T14:15:57.425977351Z" level=info msg="StartContainer for \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\"" Dec 13 14:15:57.558929 env[1916]: time="2024-12-13T14:15:57.558044201Z" level=info msg="StartContainer for \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\" returns successfully" Dec 13 14:15:57.679554 env[1916]: time="2024-12-13T14:15:57.679491109Z" level=info msg="shim disconnected" id=af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a Dec 13 14:15:57.679954 env[1916]: time="2024-12-13T14:15:57.679905864Z" level=warning msg="cleaning up after shim disconnected" id=af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a namespace=k8s.io Dec 13 14:15:57.680096 env[1916]: time="2024-12-13T14:15:57.680065173Z" level=info msg="cleaning up dead shim" Dec 13 14:15:57.699078 env[1916]: time="2024-12-13T14:15:57.698995888Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2648 runtime=io.containerd.runc.v2\n" Dec 13 14:15:57.797130 env[1916]: time="2024-12-13T14:15:57.797042151Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:57.802386 env[1916]: time="2024-12-13T14:15:57.802320074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:57.805843 env[1916]: 
time="2024-12-13T14:15:57.805768230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:57.809102 env[1916]: time="2024-12-13T14:15:57.809037300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:57.810550 env[1916]: time="2024-12-13T14:15:57.810485626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:15:57.816120 env[1916]: time="2024-12-13T14:15:57.816065523Z" level=info msg="CreateContainer within sandbox \"244811f4d6bab5d4c66a7b38cc038a294eae0da39fa706da015a48cdba10af17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:15:57.844536 env[1916]: time="2024-12-13T14:15:57.844444614Z" level=info msg="CreateContainer within sandbox \"244811f4d6bab5d4c66a7b38cc038a294eae0da39fa706da015a48cdba10af17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f052b08b952a3ae7c5f35ba78de156838486bb5fd072a91ce3cbc801fa1de237\"" Dec 13 14:15:57.845594 env[1916]: time="2024-12-13T14:15:57.845516131Z" level=info msg="StartContainer for \"f052b08b952a3ae7c5f35ba78de156838486bb5fd072a91ce3cbc801fa1de237\"" Dec 13 14:15:57.964865 env[1916]: time="2024-12-13T14:15:57.964645298Z" level=info msg="StartContainer for \"f052b08b952a3ae7c5f35ba78de156838486bb5fd072a91ce3cbc801fa1de237\" returns successfully" Dec 13 14:15:58.055087 kubelet[2337]: E1213 14:15:58.055015 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:58.353776 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a-rootfs.mount: Deactivated successfully. Dec 13 14:15:58.359180 env[1916]: time="2024-12-13T14:15:58.357984759Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:15:58.363765 kubelet[2337]: I1213 14:15:58.362224 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-87xt7" podStartSLOduration=3.651284259 podStartE2EDuration="18.362135072s" podCreationTimestamp="2024-12-13 14:15:40 +0000 UTC" firstStartedPulling="2024-12-13 14:15:43.100080674 +0000 UTC m=+4.824038761" lastFinishedPulling="2024-12-13 14:15:57.810931475 +0000 UTC m=+19.534889574" observedRunningTime="2024-12-13 14:15:58.361337953 +0000 UTC m=+20.085296076" watchObservedRunningTime="2024-12-13 14:15:58.362135072 +0000 UTC m=+20.086093195" Dec 13 14:15:58.403325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958364455.mount: Deactivated successfully. 
Dec 13 14:15:58.432066 env[1916]: time="2024-12-13T14:15:58.431837362Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\"" Dec 13 14:15:58.437030 env[1916]: time="2024-12-13T14:15:58.434782534Z" level=info msg="StartContainer for \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\"" Dec 13 14:15:58.568557 env[1916]: time="2024-12-13T14:15:58.568465258Z" level=info msg="StartContainer for \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\" returns successfully" Dec 13 14:15:58.627293 env[1916]: time="2024-12-13T14:15:58.626567803Z" level=info msg="shim disconnected" id=6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10 Dec 13 14:15:58.627770 env[1916]: time="2024-12-13T14:15:58.627709039Z" level=warning msg="cleaning up after shim disconnected" id=6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10 namespace=k8s.io Dec 13 14:15:58.627967 env[1916]: time="2024-12-13T14:15:58.627931435Z" level=info msg="cleaning up dead shim" Dec 13 14:15:58.642150 env[1916]: time="2024-12-13T14:15:58.642091169Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2862 runtime=io.containerd.runc.v2\n" Dec 13 14:15:59.055965 kubelet[2337]: E1213 14:15:59.055858 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:15:59.352363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10-rootfs.mount: Deactivated successfully. 
Dec 13 14:15:59.369503 env[1916]: time="2024-12-13T14:15:59.369428793Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:15:59.398983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946220151.mount: Deactivated successfully. Dec 13 14:15:59.416607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179940298.mount: Deactivated successfully. Dec 13 14:15:59.425643 env[1916]: time="2024-12-13T14:15:59.425566430Z" level=info msg="CreateContainer within sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\"" Dec 13 14:15:59.429714 env[1916]: time="2024-12-13T14:15:59.429609552Z" level=info msg="StartContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\"" Dec 13 14:15:59.549552 env[1916]: time="2024-12-13T14:15:59.549487482Z" level=info msg="StartContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" returns successfully" Dec 13 14:15:59.720343 kubelet[2337]: I1213 14:15:59.719938 2337 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:15:59.771753 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:16:00.037868 kubelet[2337]: E1213 14:16:00.037683 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:00.057038 kubelet[2337]: E1213 14:16:00.056967 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:00.406237 kubelet[2337]: I1213 14:16:00.405772 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4tm5f" podStartSLOduration=9.471876949 podStartE2EDuration="20.405561971s" podCreationTimestamp="2024-12-13 14:15:40 +0000 UTC" firstStartedPulling="2024-12-13 14:15:43.090623714 +0000 UTC m=+4.814581801" lastFinishedPulling="2024-12-13 14:15:54.024308736 +0000 UTC m=+15.748266823" observedRunningTime="2024-12-13 14:16:00.404080226 +0000 UTC m=+22.128038505" watchObservedRunningTime="2024-12-13 14:16:00.405561971 +0000 UTC m=+22.129520082" Dec 13 14:16:00.570752 kernel: Initializing XFRM netlink socket Dec 13 14:16:00.576752 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:16:01.057195 kubelet[2337]: E1213 14:16:01.057140 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:02.058008 kubelet[2337]: E1213 14:16:02.057903 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:02.390024 (udev-worker)[2756]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:16:02.392965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:16:02.393109 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:16:02.395377 systemd-networkd[1593]: cilium_host: Link UP
Dec 13 14:16:02.396352 systemd-networkd[1593]: cilium_net: Link UP
Dec 13 14:16:02.396838 systemd-networkd[1593]: cilium_net: Gained carrier
Dec 13 14:16:02.399035 systemd-networkd[1593]: cilium_host: Gained carrier
Dec 13 14:16:02.403547 (udev-worker)[3003]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:02.576460 systemd-networkd[1593]: cilium_vxlan: Link UP
Dec 13 14:16:02.576479 systemd-networkd[1593]: cilium_vxlan: Gained carrier
Dec 13 14:16:02.680289 systemd-networkd[1593]: cilium_net: Gained IPv6LL
Dec 13 14:16:02.984294 systemd-networkd[1593]: cilium_host: Gained IPv6LL
Dec 13 14:16:03.059144 kubelet[2337]: E1213 14:16:03.059036 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:03.077868 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:16:03.776365 kubelet[2337]: I1213 14:16:03.776299 2337 topology_manager.go:215] "Topology Admit Handler" podUID="19251db1-8204-41fe-9f68-688d18c470d0" podNamespace="default" podName="nginx-deployment-6d5f899847-hmjql"
Dec 13 14:16:03.855039 kubelet[2337]: I1213 14:16:03.854985 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97m87\" (UniqueName: \"kubernetes.io/projected/19251db1-8204-41fe-9f68-688d18c470d0-kube-api-access-97m87\") pod \"nginx-deployment-6d5f899847-hmjql\" (UID: \"19251db1-8204-41fe-9f68-688d18c470d0\") " pod="default/nginx-deployment-6d5f899847-hmjql"
Dec 13 14:16:04.059868 kubelet[2337]: E1213 14:16:04.059747 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:04.083537 env[1916]: time="2024-12-13T14:16:04.083468407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-hmjql,Uid:19251db1-8204-41fe-9f68-688d18c470d0,Namespace:default,Attempt:0,}"
Dec 13 14:16:04.136016 systemd-networkd[1593]: cilium_vxlan: Gained IPv6LL
Dec 13 14:16:04.526009 systemd-networkd[1593]: lxc_health: Link UP
Dec 13 14:16:04.528649 (udev-worker)[3033]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:04.542859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:16:04.543537 systemd-networkd[1593]: lxc_health: Gained carrier
Dec 13 14:16:05.061206 kubelet[2337]: E1213 14:16:05.061140 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:05.169223 systemd-networkd[1593]: lxcfb45d146d264: Link UP
Dec 13 14:16:05.179803 kernel: eth0: renamed from tmp4e6a2
Dec 13 14:16:05.189393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfb45d146d264: link becomes ready
Dec 13 14:16:05.188298 systemd-networkd[1593]: lxcfb45d146d264: Gained carrier
Dec 13 14:16:05.928036 systemd-networkd[1593]: lxc_health: Gained IPv6LL
Dec 13 14:16:06.062230 kubelet[2337]: E1213 14:16:06.062182 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:06.951980 systemd-networkd[1593]: lxcfb45d146d264: Gained IPv6LL
Dec 13 14:16:07.063343 kubelet[2337]: E1213 14:16:07.063292 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:08.064654 kubelet[2337]: E1213 14:16:08.064593 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:09.065842 kubelet[2337]: E1213 14:16:09.065794 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:09.331357 amazon-ssm-agent[1895]: 2024-12-13 14:16:09 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 14:16:10.006852 update_engine[1910]: I1213 14:16:10.006779 1910 update_attempter.cc:509] Updating boot flags...
Dec 13 14:16:10.067328 kubelet[2337]: E1213 14:16:10.067227 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:11.068382 kubelet[2337]: E1213 14:16:11.068278 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:12.069336 kubelet[2337]: E1213 14:16:12.069217 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:12.864677 amazon-ssm-agent[1895]: 2024-12-13 14:16:12 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:16:13.070920 kubelet[2337]: E1213 14:16:13.070850 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:14.071955 kubelet[2337]: E1213 14:16:14.071867 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:14.234183 env[1916]: time="2024-12-13T14:16:14.233629474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:16:14.234183 env[1916]: time="2024-12-13T14:16:14.233800249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:16:14.234183 env[1916]: time="2024-12-13T14:16:14.233828582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:14.236354 env[1916]: time="2024-12-13T14:16:14.235737974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e6a2318ce67da4fc1a75108241e3cd73fe830fff8b649b85bc77761c2076be6 pid=3648 runtime=io.containerd.runc.v2
Dec 13 14:16:14.285501 systemd[1]: run-containerd-runc-k8s.io-4e6a2318ce67da4fc1a75108241e3cd73fe830fff8b649b85bc77761c2076be6-runc.smZfBb.mount: Deactivated successfully.
Dec 13 14:16:14.373979 env[1916]: time="2024-12-13T14:16:14.371870246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-hmjql,Uid:19251db1-8204-41fe-9f68-688d18c470d0,Namespace:default,Attempt:0,} returns sandbox id \"4e6a2318ce67da4fc1a75108241e3cd73fe830fff8b649b85bc77761c2076be6\""
Dec 13 14:16:14.376623 env[1916]: time="2024-12-13T14:16:14.376547407Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:16:15.073136 kubelet[2337]: E1213 14:16:15.073082 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:16.074108 kubelet[2337]: E1213 14:16:16.074000 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:17.074942 kubelet[2337]: E1213 14:16:17.074884 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:18.075888 kubelet[2337]: E1213 14:16:18.075814 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:18.160459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759244219.mount: Deactivated successfully.
Dec 13 14:16:19.076566 kubelet[2337]: E1213 14:16:19.076504 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:20.036954 kubelet[2337]: E1213 14:16:20.036890 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:20.077549 kubelet[2337]: E1213 14:16:20.077483 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:20.339421 env[1916]: time="2024-12-13T14:16:20.339111420Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:20.343210 env[1916]: time="2024-12-13T14:16:20.343125639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:20.348099 env[1916]: time="2024-12-13T14:16:20.348033643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:20.352731 env[1916]: time="2024-12-13T14:16:20.352639122Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:20.357147 env[1916]: time="2024-12-13T14:16:20.354896435Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:16:20.361505 env[1916]: time="2024-12-13T14:16:20.361426607Z" level=info msg="CreateContainer within sandbox \"4e6a2318ce67da4fc1a75108241e3cd73fe830fff8b649b85bc77761c2076be6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:16:20.386994 env[1916]: time="2024-12-13T14:16:20.386922339Z" level=info msg="CreateContainer within sandbox \"4e6a2318ce67da4fc1a75108241e3cd73fe830fff8b649b85bc77761c2076be6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"de078cad6ddff8c71ad1ed53cacd6a460ce759dfa53d1e34d403c9cf41884ab8\""
Dec 13 14:16:20.388137 env[1916]: time="2024-12-13T14:16:20.388077654Z" level=info msg="StartContainer for \"de078cad6ddff8c71ad1ed53cacd6a460ce759dfa53d1e34d403c9cf41884ab8\""
Dec 13 14:16:20.524004 env[1916]: time="2024-12-13T14:16:20.523938111Z" level=info msg="StartContainer for \"de078cad6ddff8c71ad1ed53cacd6a460ce759dfa53d1e34d403c9cf41884ab8\" returns successfully"
Dec 13 14:16:21.078014 kubelet[2337]: E1213 14:16:21.077953 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:21.470075 kubelet[2337]: I1213 14:16:21.470009 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-hmjql" podStartSLOduration=12.4882005 podStartE2EDuration="18.469953352s" podCreationTimestamp="2024-12-13 14:16:03 +0000 UTC" firstStartedPulling="2024-12-13 14:16:14.375793829 +0000 UTC m=+36.099751916" lastFinishedPulling="2024-12-13 14:16:20.357546681 +0000 UTC m=+42.081504768" observedRunningTime="2024-12-13 14:16:21.469627044 +0000 UTC m=+43.193585167" watchObservedRunningTime="2024-12-13 14:16:21.469953352 +0000 UTC m=+43.193911463"
Dec 13 14:16:22.078865 kubelet[2337]: E1213 14:16:22.078808 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:23.079909 kubelet[2337]: E1213 14:16:23.079810 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:24.080752 kubelet[2337]: E1213 14:16:24.080668 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:25.081578 kubelet[2337]: E1213 14:16:25.081529 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:26.082511 kubelet[2337]: E1213 14:16:26.082446 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:27.083915 kubelet[2337]: E1213 14:16:27.083817 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:28.084861 kubelet[2337]: E1213 14:16:28.084795 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:29.086044 kubelet[2337]: E1213 14:16:29.085987 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:29.658045 kubelet[2337]: I1213 14:16:29.657968 2337 topology_manager.go:215] "Topology Admit Handler" podUID="39d788e0-0bd8-456a-ac0a-f1e18be34f85" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 14:16:29.823559 kubelet[2337]: I1213 14:16:29.823417 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/39d788e0-0bd8-456a-ac0a-f1e18be34f85-data\") pod \"nfs-server-provisioner-0\" (UID: \"39d788e0-0bd8-456a-ac0a-f1e18be34f85\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:16:29.824146 kubelet[2337]: I1213 14:16:29.824085 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhqk\" (UniqueName: \"kubernetes.io/projected/39d788e0-0bd8-456a-ac0a-f1e18be34f85-kube-api-access-7mhqk\") pod \"nfs-server-provisioner-0\" (UID: \"39d788e0-0bd8-456a-ac0a-f1e18be34f85\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:16:29.966349 env[1916]: time="2024-12-13T14:16:29.965646421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:39d788e0-0bd8-456a-ac0a-f1e18be34f85,Namespace:default,Attempt:0,}"
Dec 13 14:16:30.031346 (udev-worker)[3743]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:30.032760 (udev-worker)[3744]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:30.033603 systemd-networkd[1593]: lxc68c791676a52: Link UP
Dec 13 14:16:30.045854 kernel: eth0: renamed from tmpe039f
Dec 13 14:16:30.057660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:16:30.057990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc68c791676a52: link becomes ready
Dec 13 14:16:30.058271 systemd-networkd[1593]: lxc68c791676a52: Gained carrier
Dec 13 14:16:30.086267 kubelet[2337]: E1213 14:16:30.086183 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:30.439004 env[1916]: time="2024-12-13T14:16:30.438847996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:16:30.439004 env[1916]: time="2024-12-13T14:16:30.438940709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:16:30.439371 env[1916]: time="2024-12-13T14:16:30.438969929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:30.439893 env[1916]: time="2024-12-13T14:16:30.439784603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e039fc31af40dad1090f75fc842288e49e1cff5ea882b2882cb07d137888f832 pid=3774 runtime=io.containerd.runc.v2
Dec 13 14:16:30.549727 env[1916]: time="2024-12-13T14:16:30.549629852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:39d788e0-0bd8-456a-ac0a-f1e18be34f85,Namespace:default,Attempt:0,} returns sandbox id \"e039fc31af40dad1090f75fc842288e49e1cff5ea882b2882cb07d137888f832\""
Dec 13 14:16:30.552629 env[1916]: time="2024-12-13T14:16:30.552451876Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:16:31.087183 kubelet[2337]: E1213 14:16:31.087116 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:31.401323 systemd-networkd[1593]: lxc68c791676a52: Gained IPv6LL
Dec 13 14:16:32.088371 kubelet[2337]: E1213 14:16:32.088124 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:33.088677 kubelet[2337]: E1213 14:16:33.088580 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:33.857657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044826449.mount: Deactivated successfully.
Dec 13 14:16:34.089821 kubelet[2337]: E1213 14:16:34.089715 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:35.091179 kubelet[2337]: E1213 14:16:35.091097 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:36.092357 kubelet[2337]: E1213 14:16:36.092243 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:37.092629 kubelet[2337]: E1213 14:16:37.092562 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:37.574829 env[1916]: time="2024-12-13T14:16:37.574771113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:37.580835 env[1916]: time="2024-12-13T14:16:37.580781663Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:37.586745 env[1916]: time="2024-12-13T14:16:37.586666692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:37.592419 env[1916]: time="2024-12-13T14:16:37.592369321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:37.594400 env[1916]: time="2024-12-13T14:16:37.594340065Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Dec 13 14:16:37.599556 env[1916]: time="2024-12-13T14:16:37.599502079Z" level=info msg="CreateContainer within sandbox \"e039fc31af40dad1090f75fc842288e49e1cff5ea882b2882cb07d137888f832\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:16:37.622395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640706680.mount: Deactivated successfully.
Dec 13 14:16:37.636511 env[1916]: time="2024-12-13T14:16:37.636417166Z" level=info msg="CreateContainer within sandbox \"e039fc31af40dad1090f75fc842288e49e1cff5ea882b2882cb07d137888f832\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4c15da3152c79da8821576a4248a6b957632cbeedc8eafa77915e1ef5ac91ac1\""
Dec 13 14:16:37.637499 env[1916]: time="2024-12-13T14:16:37.637442115Z" level=info msg="StartContainer for \"4c15da3152c79da8821576a4248a6b957632cbeedc8eafa77915e1ef5ac91ac1\""
Dec 13 14:16:37.744745 env[1916]: time="2024-12-13T14:16:37.741990800Z" level=info msg="StartContainer for \"4c15da3152c79da8821576a4248a6b957632cbeedc8eafa77915e1ef5ac91ac1\" returns successfully"
Dec 13 14:16:38.093097 kubelet[2337]: E1213 14:16:38.093040 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:39.094041 kubelet[2337]: E1213 14:16:39.093972 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:40.037815 kubelet[2337]: E1213 14:16:40.037649 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:40.095189 kubelet[2337]: E1213 14:16:40.095119 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:41.095677 kubelet[2337]: E1213 14:16:41.095613 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:42.096397 kubelet[2337]: E1213 14:16:42.096241 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:43.098205 kubelet[2337]: E1213 14:16:43.098131 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:44.098981 kubelet[2337]: E1213 14:16:44.098910 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:45.099480 kubelet[2337]: E1213 14:16:45.099359 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:46.100584 kubelet[2337]: E1213 14:16:46.100505 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:47.101706 kubelet[2337]: E1213 14:16:47.101636 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:48.091238 kubelet[2337]: I1213 14:16:48.091168 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.047682501 podStartE2EDuration="19.09108919s" podCreationTimestamp="2024-12-13 14:16:29 +0000 UTC" firstStartedPulling="2024-12-13 14:16:30.551888232 +0000 UTC m=+52.275846319" lastFinishedPulling="2024-12-13 14:16:37.595294921 +0000 UTC m=+59.319253008" observedRunningTime="2024-12-13 14:16:38.536408653 +0000 UTC m=+60.260366836" watchObservedRunningTime="2024-12-13 14:16:48.09108919 +0000 UTC m=+69.815047313"
Dec 13 14:16:48.091499 kubelet[2337]: I1213 14:16:48.091432 2337 topology_manager.go:215] "Topology Admit Handler" podUID="df1592f3-8361-4093-a6b6-d8a7b084a202" podNamespace="default" podName="test-pod-1"
Dec 13 14:16:48.101812 kubelet[2337]: E1213 14:16:48.101767 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:48.224748 kubelet[2337]: I1213 14:16:48.224678 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de930156-15ba-4c52-b476-90333b66d47f\" (UniqueName: \"kubernetes.io/nfs/df1592f3-8361-4093-a6b6-d8a7b084a202-pvc-de930156-15ba-4c52-b476-90333b66d47f\") pod \"test-pod-1\" (UID: \"df1592f3-8361-4093-a6b6-d8a7b084a202\") " pod="default/test-pod-1"
Dec 13 14:16:48.225024 kubelet[2337]: I1213 14:16:48.224997 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvgs\" (UniqueName: \"kubernetes.io/projected/df1592f3-8361-4093-a6b6-d8a7b084a202-kube-api-access-fxvgs\") pod \"test-pod-1\" (UID: \"df1592f3-8361-4093-a6b6-d8a7b084a202\") " pod="default/test-pod-1"
Dec 13 14:16:48.371730 kernel: FS-Cache: Loaded
Dec 13 14:16:48.424003 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:16:48.424180 kernel: RPC: Registered udp transport module.
Dec 13 14:16:48.425864 kernel: RPC: Registered tcp transport module.
Dec 13 14:16:48.428154 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:16:48.509725 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:16:48.766083 kernel: NFS: Registering the id_resolver key type
Dec 13 14:16:48.766233 kernel: Key type id_resolver registered
Dec 13 14:16:48.767879 kernel: Key type id_legacy registered
Dec 13 14:16:48.816521 nfsidmap[3889]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:16:48.823125 nfsidmap[3890]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:16:49.000456 env[1916]: time="2024-12-13T14:16:49.000356735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df1592f3-8361-4093-a6b6-d8a7b084a202,Namespace:default,Attempt:0,}"
Dec 13 14:16:49.066975 (udev-worker)[3883]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:49.068012 (udev-worker)[3877]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:16:49.071111 systemd-networkd[1593]: lxcf22f1bfb987a: Link UP
Dec 13 14:16:49.078731 kernel: eth0: renamed from tmpb69ba
Dec 13 14:16:49.087418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:16:49.089497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf22f1bfb987a: link becomes ready
Dec 13 14:16:49.088505 systemd-networkd[1593]: lxcf22f1bfb987a: Gained carrier
Dec 13 14:16:49.107651 kubelet[2337]: E1213 14:16:49.102824 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:49.386246 env[1916]: time="2024-12-13T14:16:49.386043872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:16:49.386511 env[1916]: time="2024-12-13T14:16:49.386449953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:16:49.386758 env[1916]: time="2024-12-13T14:16:49.386638677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:16:49.387784 env[1916]: time="2024-12-13T14:16:49.387568823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b69ba410b7f061cd432bf03556c2096dfb26a976aa1889bfc443a783e5731a75 pid=3916 runtime=io.containerd.runc.v2
Dec 13 14:16:49.437289 systemd[1]: run-containerd-runc-k8s.io-b69ba410b7f061cd432bf03556c2096dfb26a976aa1889bfc443a783e5731a75-runc.vZpsss.mount: Deactivated successfully.
Dec 13 14:16:49.500614 env[1916]: time="2024-12-13T14:16:49.500526967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:df1592f3-8361-4093-a6b6-d8a7b084a202,Namespace:default,Attempt:0,} returns sandbox id \"b69ba410b7f061cd432bf03556c2096dfb26a976aa1889bfc443a783e5731a75\""
Dec 13 14:16:49.504399 env[1916]: time="2024-12-13T14:16:49.504334179Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:16:50.104148 kubelet[2337]: E1213 14:16:50.104089 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:50.244822 env[1916]: time="2024-12-13T14:16:50.244765979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:50.248787 env[1916]: time="2024-12-13T14:16:50.248732155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:50.252506 env[1916]: time="2024-12-13T14:16:50.252440462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:50.256283 env[1916]: time="2024-12-13T14:16:50.256219797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:16:50.257670 env[1916]: time="2024-12-13T14:16:50.257619683Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:16:50.262222 env[1916]: time="2024-12-13T14:16:50.262149296Z" level=info msg="CreateContainer within sandbox \"b69ba410b7f061cd432bf03556c2096dfb26a976aa1889bfc443a783e5731a75\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:16:50.299402 env[1916]: time="2024-12-13T14:16:50.299319593Z" level=info msg="CreateContainer within sandbox \"b69ba410b7f061cd432bf03556c2096dfb26a976aa1889bfc443a783e5731a75\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"525eaee50ae59aeb5811eaabd2347211bc60c4702c111262280d58e8064aa5d3\""
Dec 13 14:16:50.300511 env[1916]: time="2024-12-13T14:16:50.300424831Z" level=info msg="StartContainer for \"525eaee50ae59aeb5811eaabd2347211bc60c4702c111262280d58e8064aa5d3\""
Dec 13 14:16:50.346556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710641569.mount: Deactivated successfully.
Dec 13 14:16:50.438189 env[1916]: time="2024-12-13T14:16:50.438122339Z" level=info msg="StartContainer for \"525eaee50ae59aeb5811eaabd2347211bc60c4702c111262280d58e8064aa5d3\" returns successfully"
Dec 13 14:16:50.569171 kubelet[2337]: I1213 14:16:50.569085 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.813419078 podStartE2EDuration="20.569007807s" podCreationTimestamp="2024-12-13 14:16:30 +0000 UTC" firstStartedPulling="2024-12-13 14:16:49.502558643 +0000 UTC m=+71.226516730" lastFinishedPulling="2024-12-13 14:16:50.25814736 +0000 UTC m=+71.982105459" observedRunningTime="2024-12-13 14:16:50.567583908 +0000 UTC m=+72.291541995" watchObservedRunningTime="2024-12-13 14:16:50.569007807 +0000 UTC m=+72.292965930"
Dec 13 14:16:50.984148 systemd-networkd[1593]: lxcf22f1bfb987a: Gained IPv6LL
Dec 13 14:16:51.105114 kubelet[2337]: E1213 14:16:51.105044 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:52.105890 kubelet[2337]: E1213 14:16:52.105850 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:53.107312 kubelet[2337]: E1213 14:16:53.107239 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:54.108030 kubelet[2337]: E1213 14:16:54.107973 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:55.109210 kubelet[2337]: E1213 14:16:55.109135 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:56.109665 kubelet[2337]: E1213 14:16:56.109624 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:16:56.386817 systemd[1]: run-containerd-runc-k8s.io-ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6-runc.4YMZ3E.mount: Deactivated successfully.
Dec 13 14:16:56.423637 env[1916]: time="2024-12-13T14:16:56.423555095Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:16:56.433202 env[1916]: time="2024-12-13T14:16:56.433150775Z" level=info msg="StopContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" with timeout 2 (s)"
Dec 13 14:16:56.433969 env[1916]: time="2024-12-13T14:16:56.433927884Z" level=info msg="Stop container \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" with signal terminated"
Dec 13 14:16:56.445014 systemd-networkd[1593]: lxc_health: Link DOWN
Dec 13 14:16:56.445040 systemd-networkd[1593]: lxc_health: Lost carrier
Dec 13 14:16:56.509918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6-rootfs.mount: Deactivated successfully.
Dec 13 14:16:56.890511 env[1916]: time="2024-12-13T14:16:56.890434646Z" level=info msg="shim disconnected" id=ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6
Dec 13 14:16:56.890823 env[1916]: time="2024-12-13T14:16:56.890510390Z" level=warning msg="cleaning up after shim disconnected" id=ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6 namespace=k8s.io
Dec 13 14:16:56.890823 env[1916]: time="2024-12-13T14:16:56.890533598Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.904921 env[1916]: time="2024-12-13T14:16:56.904852280Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.910378 env[1916]: time="2024-12-13T14:16:56.910313403Z" level=info msg="StopContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" returns successfully"
Dec 13 14:16:56.911428 env[1916]: time="2024-12-13T14:16:56.911377744Z" level=info msg="StopPodSandbox for \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\""
Dec 13 14:16:56.911575 env[1916]: time="2024-12-13T14:16:56.911475796Z" level=info msg="Container to stop \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.911575 env[1916]: time="2024-12-13T14:16:56.911508376Z" level=info msg="Container to stop \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.911575 env[1916]: time="2024-12-13T14:16:56.911538076Z" level=info msg="Container to stop \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.917138 env[1916]: time="2024-12-13T14:16:56.911569517Z" level=info msg="Container to stop \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.917138 env[1916]: time="2024-12-13T14:16:56.911596097Z" level=info msg="Container to stop \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:16:56.915341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd-shm.mount: Deactivated successfully.
Dec 13 14:16:56.964864 env[1916]: time="2024-12-13T14:16:56.964789956Z" level=info msg="shim disconnected" id=01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd
Dec 13 14:16:56.965122 env[1916]: time="2024-12-13T14:16:56.964864824Z" level=warning msg="cleaning up after shim disconnected" id=01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd namespace=k8s.io
Dec 13 14:16:56.965122 env[1916]: time="2024-12-13T14:16:56.964888884Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:56.979221 env[1916]: time="2024-12-13T14:16:56.979143282Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:56.979766 env[1916]: time="2024-12-13T14:16:56.979682563Z" level=info msg="TearDown network for sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" successfully"
Dec 13 14:16:56.979869 env[1916]: time="2024-12-13T14:16:56.979760599Z" level=info msg="StopPodSandbox for \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" returns successfully"
Dec 13 14:16:57.087492 kubelet[2337]: I1213 14:16:57.087438 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-clustermesh-secrets\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.087870 kubelet[2337]: I1213 14:16:57.087821 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-kernel\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.088093 kubelet[2337]: I1213 14:16:57.088055 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hostproc\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.088354 kubelet[2337]: I1213 14:16:57.088328 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-lib-modules\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.088563 kubelet[2337]: I1213 14:16:57.088527 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-etc-cni-netd\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.088776 kubelet[2337]: I1213 14:16:57.088751 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-config-path\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.088955 kubelet[2337]: I1213 14:16:57.088934 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-run\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.089124 kubelet[2337]: I1213 14:16:57.089102 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-bpf-maps\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.089376 kubelet[2337]: I1213 14:16:57.089354 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-cgroup\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.089548 kubelet[2337]: I1213 14:16:57.089526 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hubble-tls\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.089743 kubelet[2337]: I1213 14:16:57.089721 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-xtables-lock\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.089934 kubelet[2337]: I1213 14:16:57.089899 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cni-path\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") "
Dec 13 14:16:57.090093 kubelet[2337]: I1213 14:16:57.090072 2337
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-net\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " Dec 13 14:16:57.090293 kubelet[2337]: I1213 14:16:57.090259 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2dtc\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-kube-api-access-m2dtc\") pod \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\" (UID: \"cfdf2a87-71c8-452e-a524-dd06bcc4f0f8\") " Dec 13 14:16:57.091384 kubelet[2337]: I1213 14:16:57.088750 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091529 kubelet[2337]: I1213 14:16:57.088790 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091529 kubelet[2337]: I1213 14:16:57.088823 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091529 kubelet[2337]: I1213 14:16:57.088851 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091529 kubelet[2337]: I1213 14:16:57.091476 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091821 kubelet[2337]: I1213 14:16:57.091525 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.091821 kubelet[2337]: I1213 14:16:57.091566 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.094929 kubelet[2337]: I1213 14:16:57.094849 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:16:57.095101 kubelet[2337]: I1213 14:16:57.094985 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.095101 kubelet[2337]: I1213 14:16:57.095030 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.095101 kubelet[2337]: I1213 14:16:57.095075 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:16:57.099005 kubelet[2337]: I1213 14:16:57.098940 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-kube-api-access-m2dtc" (OuterVolumeSpecName: "kube-api-access-m2dtc") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "kube-api-access-m2dtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:57.100130 kubelet[2337]: I1213 14:16:57.100066 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:16:57.104426 kubelet[2337]: I1213 14:16:57.104353 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" (UID: "cfdf2a87-71c8-452e-a524-dd06bcc4f0f8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:16:57.110429 kubelet[2337]: E1213 14:16:57.110390 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:57.190995 kubelet[2337]: I1213 14:16:57.190929 2337 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-etc-cni-netd\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.190995 kubelet[2337]: I1213 14:16:57.190998 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-config-path\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191025 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-run\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191051 2337 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-bpf-maps\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191076 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cilium-cgroup\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191099 2337 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hubble-tls\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191126 2337 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-xtables-lock\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191150 2337 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-cni-path\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191173 2337 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-net\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191243 kubelet[2337]: I1213 14:16:57.191198 2337 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m2dtc\" (UniqueName: \"kubernetes.io/projected/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-kube-api-access-m2dtc\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191731 kubelet[2337]: I1213 14:16:57.191223 2337 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-clustermesh-secrets\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191731 kubelet[2337]: I1213 14:16:57.191247 2337 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-host-proc-sys-kernel\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191731 kubelet[2337]: I1213 14:16:57.191270 2337 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-hostproc\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.191731 kubelet[2337]: I1213 14:16:57.191293 2337 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8-lib-modules\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:16:57.374897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd-rootfs.mount: Deactivated successfully. Dec 13 14:16:57.375205 systemd[1]: var-lib-kubelet-pods-cfdf2a87\x2d71c8\x2d452e\x2da524\x2ddd06bcc4f0f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:16:57.375429 systemd[1]: var-lib-kubelet-pods-cfdf2a87\x2d71c8\x2d452e\x2da524\x2ddd06bcc4f0f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm2dtc.mount: Deactivated successfully. Dec 13 14:16:57.375645 systemd[1]: var-lib-kubelet-pods-cfdf2a87\x2d71c8\x2d452e\x2da524\x2ddd06bcc4f0f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:16:57.571726 kubelet[2337]: I1213 14:16:57.571602 2337 scope.go:117] "RemoveContainer" containerID="ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6" Dec 13 14:16:57.575455 env[1916]: time="2024-12-13T14:16:57.575005875Z" level=info msg="RemoveContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\"" Dec 13 14:16:57.584313 env[1916]: time="2024-12-13T14:16:57.584251910Z" level=info msg="RemoveContainer for \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" returns successfully" Dec 13 14:16:57.584983 kubelet[2337]: I1213 14:16:57.584949 2337 scope.go:117] "RemoveContainer" containerID="6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10" Dec 13 14:16:57.587763 env[1916]: time="2024-12-13T14:16:57.587241317Z" level=info msg="RemoveContainer for \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\"" Dec 13 14:16:57.594492 env[1916]: time="2024-12-13T14:16:57.594420614Z" level=info msg="RemoveContainer for \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\" returns 
successfully" Dec 13 14:16:57.594915 kubelet[2337]: I1213 14:16:57.594863 2337 scope.go:117] "RemoveContainer" containerID="af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a" Dec 13 14:16:57.598095 env[1916]: time="2024-12-13T14:16:57.597612089Z" level=info msg="RemoveContainer for \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\"" Dec 13 14:16:57.603471 env[1916]: time="2024-12-13T14:16:57.603417504Z" level=info msg="RemoveContainer for \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\" returns successfully" Dec 13 14:16:57.604755 kubelet[2337]: I1213 14:16:57.604661 2337 scope.go:117] "RemoveContainer" containerID="eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83" Dec 13 14:16:57.607444 env[1916]: time="2024-12-13T14:16:57.607353329Z" level=info msg="RemoveContainer for \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\"" Dec 13 14:16:57.614218 env[1916]: time="2024-12-13T14:16:57.614117701Z" level=info msg="RemoveContainer for \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\" returns successfully" Dec 13 14:16:57.614680 kubelet[2337]: I1213 14:16:57.614649 2337 scope.go:117] "RemoveContainer" containerID="0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739" Dec 13 14:16:57.617239 env[1916]: time="2024-12-13T14:16:57.617150753Z" level=info msg="RemoveContainer for \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\"" Dec 13 14:16:57.623862 env[1916]: time="2024-12-13T14:16:57.623786413Z" level=info msg="RemoveContainer for \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\" returns successfully" Dec 13 14:16:57.624312 kubelet[2337]: I1213 14:16:57.624281 2337 scope.go:117] "RemoveContainer" containerID="ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6" Dec 13 14:16:57.624940 env[1916]: time="2024-12-13T14:16:57.624823886Z" level=error msg="ContainerStatus for 
\"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\": not found" Dec 13 14:16:57.625349 kubelet[2337]: E1213 14:16:57.625323 2337 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\": not found" containerID="ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6" Dec 13 14:16:57.625651 kubelet[2337]: I1213 14:16:57.625615 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6"} err="failed to get container status \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad1ff9790dd209b98ab03e1be1cd9713df1042d7b9ae49618274505f69d61fd6\": not found" Dec 13 14:16:57.625822 kubelet[2337]: I1213 14:16:57.625800 2337 scope.go:117] "RemoveContainer" containerID="6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10" Dec 13 14:16:57.626747 env[1916]: time="2024-12-13T14:16:57.626581024Z" level=error msg="ContainerStatus for \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\": not found" Dec 13 14:16:57.627037 kubelet[2337]: E1213 14:16:57.626999 2337 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\": not found" 
containerID="6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10" Dec 13 14:16:57.627134 kubelet[2337]: I1213 14:16:57.627063 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10"} err="failed to get container status \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ff0cdda3164b4fab64ed408d882d6d7f16891eeb7d7505ef584fae4dd2fde10\": not found" Dec 13 14:16:57.627134 kubelet[2337]: I1213 14:16:57.627089 2337 scope.go:117] "RemoveContainer" containerID="af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a" Dec 13 14:16:57.627659 env[1916]: time="2024-12-13T14:16:57.627527705Z" level=error msg="ContainerStatus for \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\": not found" Dec 13 14:16:57.628041 kubelet[2337]: E1213 14:16:57.627959 2337 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\": not found" containerID="af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a" Dec 13 14:16:57.628041 kubelet[2337]: I1213 14:16:57.628025 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a"} err="failed to get container status \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"af83171067a54696c18b931b94342b5f7ab6c350501ccff2d67ea6afda024b6a\": not found" Dec 13 
14:16:57.628226 kubelet[2337]: I1213 14:16:57.628056 2337 scope.go:117] "RemoveContainer" containerID="eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83" Dec 13 14:16:57.628747 env[1916]: time="2024-12-13T14:16:57.628569822Z" level=error msg="ContainerStatus for \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\": not found" Dec 13 14:16:57.628866 kubelet[2337]: E1213 14:16:57.628844 2337 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\": not found" containerID="eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83" Dec 13 14:16:57.628941 kubelet[2337]: I1213 14:16:57.628896 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83"} err="failed to get container status \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb528ecc36f3f3bc8dc3c08bd3a9de2f1333a79ee2d68e30b34e5e178206fd83\": not found" Dec 13 14:16:57.628941 kubelet[2337]: I1213 14:16:57.628919 2337 scope.go:117] "RemoveContainer" containerID="0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739" Dec 13 14:16:57.629678 env[1916]: time="2024-12-13T14:16:57.629569663Z" level=error msg="ContainerStatus for \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\": not found" Dec 13 14:16:57.629961 kubelet[2337]: E1213 14:16:57.629921 2337 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\": not found" containerID="0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739" Dec 13 14:16:57.630043 kubelet[2337]: I1213 14:16:57.629983 2337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739"} err="failed to get container status \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d71510214061c0b9cf2e942e03eb047b2277eeed3e24eed5db7034c66610739\": not found" Dec 13 14:16:58.111444 kubelet[2337]: E1213 14:16:58.111397 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:58.277005 kubelet[2337]: I1213 14:16:58.276952 2337 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" path="/var/lib/kubelet/pods/cfdf2a87-71c8-452e-a524-dd06bcc4f0f8/volumes" Dec 13 14:16:59.112870 kubelet[2337]: E1213 14:16:59.112804 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:16:59.802821 kubelet[2337]: I1213 14:16:59.802753 2337 topology_manager.go:215] "Topology Admit Handler" podUID="1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" podNamespace="kube-system" podName="cilium-pr84d" Dec 13 14:16:59.802984 kubelet[2337]: E1213 14:16:59.802840 2337 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="clean-cilium-state" Dec 13 14:16:59.802984 kubelet[2337]: E1213 14:16:59.802863 2337 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="cilium-agent" Dec 13 14:16:59.802984 kubelet[2337]: E1213 14:16:59.802882 2337 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="mount-cgroup" Dec 13 14:16:59.802984 kubelet[2337]: E1213 14:16:59.802902 2337 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="apply-sysctl-overwrites" Dec 13 14:16:59.802984 kubelet[2337]: E1213 14:16:59.802920 2337 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="mount-bpf-fs" Dec 13 14:16:59.802984 kubelet[2337]: I1213 14:16:59.802963 2337 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdf2a87-71c8-452e-a524-dd06bcc4f0f8" containerName="cilium-agent" Dec 13 14:16:59.803680 kubelet[2337]: I1213 14:16:59.803645 2337 topology_manager.go:215] "Topology Admit Handler" podUID="33066bba-b6e5-43a3-8331-45ede334000f" podNamespace="kube-system" podName="cilium-operator-5cc964979-88tlc" Dec 13 14:16:59.841859 kubelet[2337]: W1213 14:16:59.841807 2337 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.20.24" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.841859 kubelet[2337]: E1213 14:16:59.841870 2337 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.20.24" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.846337 kubelet[2337]: W1213 14:16:59.846245 2337 reflector.go:539] 
object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.20.24" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.846820 kubelet[2337]: E1213 14:16:59.846777 2337 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.20.24" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.847024 kubelet[2337]: W1213 14:16:59.846333 2337 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.20.24" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.847193 kubelet[2337]: E1213 14:16:59.847166 2337 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.20.24" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.24' and this object Dec 13 14:16:59.906192 kubelet[2337]: I1213 14:16:59.906132 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxt82\" (UniqueName: \"kubernetes.io/projected/33066bba-b6e5-43a3-8331-45ede334000f-kube-api-access-fxt82\") pod \"cilium-operator-5cc964979-88tlc\" (UID: \"33066bba-b6e5-43a3-8331-45ede334000f\") " pod="kube-system/cilium-operator-5cc964979-88tlc" Dec 13 14:16:59.906536 kubelet[2337]: I1213 14:16:59.906479 2337 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hostproc\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.906793 kubelet[2337]: I1213 14:16:59.906769 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-etc-cni-netd\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.906992 kubelet[2337]: I1213 14:16:59.906953 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-config-path\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.907216 kubelet[2337]: I1213 14:16:59.907171 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-kernel\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.907414 kubelet[2337]: I1213 14:16:59.907376 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33066bba-b6e5-43a3-8331-45ede334000f-cilium-config-path\") pod \"cilium-operator-5cc964979-88tlc\" (UID: \"33066bba-b6e5-43a3-8331-45ede334000f\") " pod="kube-system/cilium-operator-5cc964979-88tlc" Dec 13 14:16:59.907630 kubelet[2337]: I1213 14:16:59.907585 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-xtables-lock\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.907852 kubelet[2337]: I1213 14:16:59.907816 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-clustermesh-secrets\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908017 kubelet[2337]: I1213 14:16:59.907995 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-bpf-maps\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908235 kubelet[2337]: I1213 14:16:59.908181 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-net\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908507 kubelet[2337]: I1213 14:16:59.908478 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cni-path\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908607 kubelet[2337]: I1213 14:16:59.908548 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets\") pod 
\"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908607 kubelet[2337]: I1213 14:16:59.908598 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-cgroup\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908766 kubelet[2337]: I1213 14:16:59.908646 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-lib-modules\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908766 kubelet[2337]: I1213 14:16:59.908714 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-run\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908879 kubelet[2337]: I1213 14:16:59.908770 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hubble-tls\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:16:59.908879 kubelet[2337]: I1213 14:16:59.908819 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2m4\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-kube-api-access-wn2m4\") pod \"cilium-pr84d\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " pod="kube-system/cilium-pr84d" Dec 13 14:17:00.040554 
kubelet[2337]: E1213 14:17:00.040515 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:00.113870 kubelet[2337]: E1213 14:17:00.113751 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:00.183844 kubelet[2337]: E1213 14:17:00.183811 2337 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:17:00.471166 kubelet[2337]: E1213 14:17:00.471075 2337 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-pr84d" podUID="1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" Dec 13 14:17:00.716957 kubelet[2337]: I1213 14:17:00.716889 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-cgroup\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717141 kubelet[2337]: I1213 14:17:00.716974 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-run\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717141 kubelet[2337]: I1213 14:17:00.717019 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-xtables-lock\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717141 
kubelet[2337]: I1213 14:17:00.717073 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2m4\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-kube-api-access-wn2m4\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717141 kubelet[2337]: I1213 14:17:00.717116 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hostproc\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717154 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-bpf-maps\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717196 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cni-path\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717242 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-kernel\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717284 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-lib-modules\") pod 
\"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717325 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-etc-cni-netd\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717395 kubelet[2337]: I1213 14:17:00.717365 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-net\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717801 kubelet[2337]: I1213 14:17:00.717412 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hubble-tls\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:00.717879 kubelet[2337]: I1213 14:17:00.717844 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.717965 kubelet[2337]: I1213 14:17:00.717949 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.718042 kubelet[2337]: I1213 14:17:00.718001 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.718125 kubelet[2337]: I1213 14:17:00.718045 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.725020 kubelet[2337]: I1213 14:17:00.724001 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.725462 kubelet[2337]: I1213 14:17:00.725421 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.725610 kubelet[2337]: I1213 14:17:00.725582 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.725793 kubelet[2337]: I1213 14:17:00.725765 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.725955 kubelet[2337]: I1213 14:17:00.725927 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.726209 kubelet[2337]: I1213 14:17:00.726164 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:00.726394 kubelet[2337]: I1213 14:17:00.726365 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:00.728595 systemd[1]: var-lib-kubelet-pods-1c8d4e04\x2d13b7\x2d49b2\x2db765\x2d9fc4d1ea9937-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:17:00.736403 systemd[1]: var-lib-kubelet-pods-1c8d4e04\x2d13b7\x2d49b2\x2db765\x2d9fc4d1ea9937-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwn2m4.mount: Deactivated successfully. Dec 13 14:17:00.736651 kubelet[2337]: I1213 14:17:00.736398 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-kube-api-access-wn2m4" (OuterVolumeSpecName: "kube-api-access-wn2m4") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "kube-api-access-wn2m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:00.818124 kubelet[2337]: I1213 14:17:00.818083 2337 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-lib-modules\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.818357 kubelet[2337]: I1213 14:17:00.818335 2337 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-kernel\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.818483 kubelet[2337]: I1213 14:17:00.818462 2337 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-etc-cni-netd\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.818617 kubelet[2337]: I1213 14:17:00.818597 2337 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-host-proc-sys-net\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.818779 kubelet[2337]: I1213 14:17:00.818756 2337 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hubble-tls\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.818897 kubelet[2337]: I1213 14:17:00.818877 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-run\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819019 kubelet[2337]: I1213 14:17:00.818998 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-cgroup\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819133 
kubelet[2337]: I1213 14:17:00.819113 2337 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-xtables-lock\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819264 kubelet[2337]: I1213 14:17:00.819243 2337 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wn2m4\" (UniqueName: \"kubernetes.io/projected/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-kube-api-access-wn2m4\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819378 kubelet[2337]: I1213 14:17:00.819358 2337 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cni-path\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819497 kubelet[2337]: I1213 14:17:00.819478 2337 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-hostproc\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:00.819616 kubelet[2337]: I1213 14:17:00.819597 2337 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-bpf-maps\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:01.011983 kubelet[2337]: E1213 14:17:01.010682 2337 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 13 14:17:01.012342 kubelet[2337]: E1213 14:17:01.012308 2337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets podName:1c8d4e04-13b7-49b2-b765-9fc4d1ea9937 nodeName:}" failed. No retries permitted until 2024-12-13 14:17:01.512269311 +0000 UTC m=+83.236227398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets") pod "cilium-pr84d" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937") : failed to sync secret cache: timed out waiting for the condition Dec 13 14:17:01.014481 env[1916]: time="2024-12-13T14:17:01.013838381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-88tlc,Uid:33066bba-b6e5-43a3-8331-45ede334000f,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:01.024644 kubelet[2337]: I1213 14:17:01.024571 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-config-path\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:01.033166 kubelet[2337]: I1213 14:17:01.033110 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:17:01.059021 env[1916]: time="2024-12-13T14:17:01.058905014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:01.059219 env[1916]: time="2024-12-13T14:17:01.058980425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:01.059219 env[1916]: time="2024-12-13T14:17:01.059008158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:01.059450 env[1916]: time="2024-12-13T14:17:01.059378707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d37bd29390b1f84c2cda0a54fc0333b3267c9847dcd7c3588eb6128de5319e0 pid=4108 runtime=io.containerd.runc.v2 Dec 13 14:17:01.115322 kubelet[2337]: E1213 14:17:01.115241 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:01.130178 kubelet[2337]: I1213 14:17:01.125580 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-clustermesh-secrets\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:01.130178 kubelet[2337]: I1213 14:17:01.125727 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-config-path\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:01.137584 systemd[1]: var-lib-kubelet-pods-1c8d4e04\x2d13b7\x2d49b2\x2db765\x2d9fc4d1ea9937-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:17:01.139396 kubelet[2337]: I1213 14:17:01.139319 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:01.163870 env[1916]: time="2024-12-13T14:17:01.163816018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-88tlc,Uid:33066bba-b6e5-43a3-8331-45ede334000f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d37bd29390b1f84c2cda0a54fc0333b3267c9847dcd7c3588eb6128de5319e0\"" Dec 13 14:17:01.166760 env[1916]: time="2024-12-13T14:17:01.166651314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:17:01.226264 kubelet[2337]: I1213 14:17:01.226204 2337 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-clustermesh-secrets\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:01.628817 kubelet[2337]: I1213 14:17:01.628751 2337 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets\") pod \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\" (UID: \"1c8d4e04-13b7-49b2-b765-9fc4d1ea9937\") " Dec 13 14:17:01.634256 kubelet[2337]: I1213 14:17:01.634196 2337 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" (UID: "1c8d4e04-13b7-49b2-b765-9fc4d1ea9937"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:01.729559 kubelet[2337]: I1213 14:17:01.729512 2337 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937-cilium-ipsec-secrets\") on node \"172.31.20.24\" DevicePath \"\"" Dec 13 14:17:02.001645 kubelet[2337]: I1213 14:17:02.001602 2337 setters.go:568] "Node became not ready" node="172.31.20.24" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:17:02Z","lastTransitionTime":"2024-12-13T14:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:17:02.008796 kubelet[2337]: I1213 14:17:02.008732 2337 topology_manager.go:215] "Topology Admit Handler" podUID="4d0af6a5-a57e-4161-82b5-b72d9a35d700" podNamespace="kube-system" podName="cilium-6xhht" Dec 13 14:17:02.115636 kubelet[2337]: E1213 14:17:02.115581 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:02.132001 kubelet[2337]: I1213 14:17:02.131952 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-cilium-cgroup\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132180 kubelet[2337]: I1213 14:17:02.132036 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-cni-path\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132180 kubelet[2337]: I1213 14:17:02.132084 2337 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d0af6a5-a57e-4161-82b5-b72d9a35d700-clustermesh-secrets\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132180 kubelet[2337]: I1213 14:17:02.132130 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fndrj\" (UniqueName: \"kubernetes.io/projected/4d0af6a5-a57e-4161-82b5-b72d9a35d700-kube-api-access-fndrj\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132180 kubelet[2337]: I1213 14:17:02.132176 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-hostproc\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132440 kubelet[2337]: I1213 14:17:02.132222 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-host-proc-sys-net\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132440 kubelet[2337]: I1213 14:17:02.132282 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d0af6a5-a57e-4161-82b5-b72d9a35d700-cilium-ipsec-secrets\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132440 kubelet[2337]: I1213 14:17:02.132332 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-host-proc-sys-kernel\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132440 kubelet[2337]: I1213 14:17:02.132383 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d0af6a5-a57e-4161-82b5-b72d9a35d700-hubble-tls\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132440 kubelet[2337]: I1213 14:17:02.132432 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d0af6a5-a57e-4161-82b5-b72d9a35d700-cilium-config-path\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132800 kubelet[2337]: I1213 14:17:02.132479 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-bpf-maps\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132800 kubelet[2337]: I1213 14:17:02.132523 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-etc-cni-netd\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132800 kubelet[2337]: I1213 14:17:02.132567 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-lib-modules\") pod \"cilium-6xhht\" (UID: 
\"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132800 kubelet[2337]: I1213 14:17:02.132611 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-xtables-lock\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.132800 kubelet[2337]: I1213 14:17:02.132654 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d0af6a5-a57e-4161-82b5-b72d9a35d700-cilium-run\") pod \"cilium-6xhht\" (UID: \"4d0af6a5-a57e-4161-82b5-b72d9a35d700\") " pod="kube-system/cilium-6xhht" Dec 13 14:17:02.283582 kubelet[2337]: I1213 14:17:02.283462 2337 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1c8d4e04-13b7-49b2-b765-9fc4d1ea9937" path="/var/lib/kubelet/pods/1c8d4e04-13b7-49b2-b765-9fc4d1ea9937/volumes" Dec 13 14:17:02.324910 env[1916]: time="2024-12-13T14:17:02.324839222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xhht,Uid:4d0af6a5-a57e-4161-82b5-b72d9a35d700,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:02.354779 env[1916]: time="2024-12-13T14:17:02.354336130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:02.354779 env[1916]: time="2024-12-13T14:17:02.354431869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:02.354779 env[1916]: time="2024-12-13T14:17:02.354460790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:02.355350 env[1916]: time="2024-12-13T14:17:02.355244622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611 pid=4156 runtime=io.containerd.runc.v2 Dec 13 14:17:02.432033 env[1916]: time="2024-12-13T14:17:02.431968075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xhht,Uid:4d0af6a5-a57e-4161-82b5-b72d9a35d700,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\"" Dec 13 14:17:02.439507 env[1916]: time="2024-12-13T14:17:02.439332509Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:17:02.466059 env[1916]: time="2024-12-13T14:17:02.465988580Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf1364394e10af7064759481cb99732fc1b5f215c054db893c45cde7bd2630d9\"" Dec 13 14:17:02.467941 env[1916]: time="2024-12-13T14:17:02.467885752Z" level=info msg="StartContainer for \"bf1364394e10af7064759481cb99732fc1b5f215c054db893c45cde7bd2630d9\"" Dec 13 14:17:02.572998 env[1916]: time="2024-12-13T14:17:02.572221481Z" level=info msg="StartContainer for \"bf1364394e10af7064759481cb99732fc1b5f215c054db893c45cde7bd2630d9\" returns successfully" Dec 13 14:17:02.640091 env[1916]: time="2024-12-13T14:17:02.640025123Z" level=info msg="shim disconnected" id=bf1364394e10af7064759481cb99732fc1b5f215c054db893c45cde7bd2630d9 Dec 13 14:17:02.640562 env[1916]: time="2024-12-13T14:17:02.640522937Z" level=warning msg="cleaning up after shim disconnected" id=bf1364394e10af7064759481cb99732fc1b5f215c054db893c45cde7bd2630d9 
namespace=k8s.io Dec 13 14:17:02.640735 env[1916]: time="2024-12-13T14:17:02.640682482Z" level=info msg="cleaning up dead shim" Dec 13 14:17:02.654952 env[1916]: time="2024-12-13T14:17:02.654895897Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4244 runtime=io.containerd.runc.v2\n" Dec 13 14:17:03.116678 kubelet[2337]: E1213 14:17:03.116632 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:03.599905 env[1916]: time="2024-12-13T14:17:03.599793389Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:17:03.625659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705535942.mount: Deactivated successfully. Dec 13 14:17:03.637794 env[1916]: time="2024-12-13T14:17:03.637713012Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609\"" Dec 13 14:17:03.638784 env[1916]: time="2024-12-13T14:17:03.638686306Z" level=info msg="StartContainer for \"42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609\"" Dec 13 14:17:03.735337 env[1916]: time="2024-12-13T14:17:03.735272123Z" level=info msg="StartContainer for \"42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609\" returns successfully" Dec 13 14:17:03.789762 env[1916]: time="2024-12-13T14:17:03.789671126Z" level=info msg="shim disconnected" id=42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609 Dec 13 14:17:03.790150 env[1916]: time="2024-12-13T14:17:03.790116233Z" level=warning msg="cleaning up after shim disconnected" 
id=42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609 namespace=k8s.io Dec 13 14:17:03.790277 env[1916]: time="2024-12-13T14:17:03.790249330Z" level=info msg="cleaning up dead shim" Dec 13 14:17:03.804748 env[1916]: time="2024-12-13T14:17:03.804664146Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4308 runtime=io.containerd.runc.v2\n" Dec 13 14:17:04.020355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42d6381ff0c19ae8400f7c6266fbf3f9e0115bf983ef23eec286b149d658f609-rootfs.mount: Deactivated successfully. Dec 13 14:17:04.118450 kubelet[2337]: E1213 14:17:04.118283 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:04.606635 env[1916]: time="2024-12-13T14:17:04.606569745Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:17:04.636196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157543362.mount: Deactivated successfully. 
Dec 13 14:17:04.656742 env[1916]: time="2024-12-13T14:17:04.656641157Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1d6e5f6e23cd41d77fe9202e25144771934c1a2c197da449a4cd4f8c8b6b182\"" Dec 13 14:17:04.657816 env[1916]: time="2024-12-13T14:17:04.657770215Z" level=info msg="StartContainer for \"d1d6e5f6e23cd41d77fe9202e25144771934c1a2c197da449a4cd4f8c8b6b182\"" Dec 13 14:17:04.774892 env[1916]: time="2024-12-13T14:17:04.774819427Z" level=info msg="StartContainer for \"d1d6e5f6e23cd41d77fe9202e25144771934c1a2c197da449a4cd4f8c8b6b182\" returns successfully" Dec 13 14:17:04.829048 env[1916]: time="2024-12-13T14:17:04.828971889Z" level=info msg="shim disconnected" id=d1d6e5f6e23cd41d77fe9202e25144771934c1a2c197da449a4cd4f8c8b6b182 Dec 13 14:17:04.829048 env[1916]: time="2024-12-13T14:17:04.829042283Z" level=warning msg="cleaning up after shim disconnected" id=d1d6e5f6e23cd41d77fe9202e25144771934c1a2c197da449a4cd4f8c8b6b182 namespace=k8s.io Dec 13 14:17:04.829379 env[1916]: time="2024-12-13T14:17:04.829065144Z" level=info msg="cleaning up dead shim" Dec 13 14:17:04.847033 env[1916]: time="2024-12-13T14:17:04.846943503Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4367 runtime=io.containerd.runc.v2\n" Dec 13 14:17:05.081202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161539657.mount: Deactivated successfully. 
Dec 13 14:17:05.118806 kubelet[2337]: E1213 14:17:05.118735 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:05.186541 kubelet[2337]: E1213 14:17:05.186489 2337 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:17:05.619327 env[1916]: time="2024-12-13T14:17:05.619237349Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:17:05.653566 env[1916]: time="2024-12-13T14:17:05.653463168Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d009e39216b5437e6222d4d70750b5a542b09097ada7b9f1397568e0f3e5605e\"" Dec 13 14:17:05.654789 env[1916]: time="2024-12-13T14:17:05.654740310Z" level=info msg="StartContainer for \"d009e39216b5437e6222d4d70750b5a542b09097ada7b9f1397568e0f3e5605e\"" Dec 13 14:17:05.760960 env[1916]: time="2024-12-13T14:17:05.760883490Z" level=info msg="StartContainer for \"d009e39216b5437e6222d4d70750b5a542b09097ada7b9f1397568e0f3e5605e\" returns successfully" Dec 13 14:17:05.822900 env[1916]: time="2024-12-13T14:17:05.822836842Z" level=info msg="shim disconnected" id=d009e39216b5437e6222d4d70750b5a542b09097ada7b9f1397568e0f3e5605e Dec 13 14:17:05.823233 env[1916]: time="2024-12-13T14:17:05.823200034Z" level=warning msg="cleaning up after shim disconnected" id=d009e39216b5437e6222d4d70750b5a542b09097ada7b9f1397568e0f3e5605e namespace=k8s.io Dec 13 14:17:05.823360 env[1916]: time="2024-12-13T14:17:05.823332074Z" level=info msg="cleaning up dead shim" Dec 13 14:17:05.852412 env[1916]: time="2024-12-13T14:17:05.852306257Z" level=warning msg="cleanup 
warnings time=\"2024-12-13T14:17:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4424 runtime=io.containerd.runc.v2\n" Dec 13 14:17:06.119878 kubelet[2337]: E1213 14:17:06.119823 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:06.481241 env[1916]: time="2024-12-13T14:17:06.481179598Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:17:06.486398 env[1916]: time="2024-12-13T14:17:06.486344755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:17:06.492309 env[1916]: time="2024-12-13T14:17:06.492252567Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:17:06.493892 env[1916]: time="2024-12-13T14:17:06.490869847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:17:06.495479 env[1916]: time="2024-12-13T14:17:06.495404288Z" level=info msg="CreateContainer within sandbox \"0d37bd29390b1f84c2cda0a54fc0333b3267c9847dcd7c3588eb6128de5319e0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:17:06.517683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119143397.mount: Deactivated successfully. 
Dec 13 14:17:06.530917 env[1916]: time="2024-12-13T14:17:06.530828194Z" level=info msg="CreateContainer within sandbox \"0d37bd29390b1f84c2cda0a54fc0333b3267c9847dcd7c3588eb6128de5319e0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d5fec5b3e8295d458b3112674734b6b5609b8be16642d386acbd8c3da5e19f86\"" Dec 13 14:17:06.531970 env[1916]: time="2024-12-13T14:17:06.531902048Z" level=info msg="StartContainer for \"d5fec5b3e8295d458b3112674734b6b5609b8be16642d386acbd8c3da5e19f86\"" Dec 13 14:17:06.641339 env[1916]: time="2024-12-13T14:17:06.641280917Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:17:06.650497 env[1916]: time="2024-12-13T14:17:06.646524040Z" level=info msg="StartContainer for \"d5fec5b3e8295d458b3112674734b6b5609b8be16642d386acbd8c3da5e19f86\" returns successfully" Dec 13 14:17:06.690464 env[1916]: time="2024-12-13T14:17:06.690387587Z" level=info msg="CreateContainer within sandbox \"f1d666822eb7ad5c139fba55112301cf955c2f54cf320e4f1479939978041611\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0\"" Dec 13 14:17:06.692465 env[1916]: time="2024-12-13T14:17:06.692369643Z" level=info msg="StartContainer for \"88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0\"" Dec 13 14:17:06.828530 env[1916]: time="2024-12-13T14:17:06.828264569Z" level=info msg="StartContainer for \"88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0\" returns successfully" Dec 13 14:17:07.120971 kubelet[2337]: E1213 14:17:07.120821 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:07.617770 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:17:07.697900 kubelet[2337]: I1213 
14:17:07.697856 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-88tlc" podStartSLOduration=3.370744465 podStartE2EDuration="8.697800156s" podCreationTimestamp="2024-12-13 14:16:59 +0000 UTC" firstStartedPulling="2024-12-13 14:17:01.165794975 +0000 UTC m=+82.889753062" lastFinishedPulling="2024-12-13 14:17:06.492850654 +0000 UTC m=+88.216808753" observedRunningTime="2024-12-13 14:17:07.697029828 +0000 UTC m=+89.420987927" watchObservedRunningTime="2024-12-13 14:17:07.697800156 +0000 UTC m=+89.421758267" Dec 13 14:17:07.698283 kubelet[2337]: I1213 14:17:07.698257 2337 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6xhht" podStartSLOduration=6.698217121 podStartE2EDuration="6.698217121s" podCreationTimestamp="2024-12-13 14:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:17:07.672550221 +0000 UTC m=+89.396508344" watchObservedRunningTime="2024-12-13 14:17:07.698217121 +0000 UTC m=+89.422175256" Dec 13 14:17:08.121490 kubelet[2337]: E1213 14:17:08.121445 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:09.122987 kubelet[2337]: E1213 14:17:09.122941 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:09.340247 systemd[1]: run-containerd-runc-k8s.io-88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0-runc.uLIogn.mount: Deactivated successfully. 
Dec 13 14:17:10.124517 kubelet[2337]: E1213 14:17:10.124453 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:11.125852 kubelet[2337]: E1213 14:17:11.125797 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:11.826056 systemd-networkd[1593]: lxc_health: Link UP Dec 13 14:17:11.838108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:17:11.839089 systemd-networkd[1593]: lxc_health: Gained carrier Dec 13 14:17:11.846624 (udev-worker)[5024]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:17:11.884580 (udev-worker)[5022]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:17:12.126733 kubelet[2337]: E1213 14:17:12.126535 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:13.128106 kubelet[2337]: E1213 14:17:13.128036 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:13.320891 systemd-networkd[1593]: lxc_health: Gained IPv6LL Dec 13 14:17:14.003775 systemd[1]: run-containerd-runc-k8s.io-88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0-runc.mV5qjc.mount: Deactivated successfully. 
Dec 13 14:17:14.129255 kubelet[2337]: E1213 14:17:14.129188 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:15.129970 kubelet[2337]: E1213 14:17:15.129897 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:16.130430 kubelet[2337]: E1213 14:17:16.130317 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:16.343078 systemd[1]: run-containerd-runc-k8s.io-88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0-runc.ZcFzMo.mount: Deactivated successfully. Dec 13 14:17:17.131336 kubelet[2337]: E1213 14:17:17.131273 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:18.132479 kubelet[2337]: E1213 14:17:18.132426 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:18.665449 systemd[1]: run-containerd-runc-k8s.io-88dd7f7eb0c7d053f6f24bc64280da6b1c30a027caa3b45e2fa20e1fcd073ec0-runc.4hjfbi.mount: Deactivated successfully. 
Dec 13 14:17:19.133974 kubelet[2337]: E1213 14:17:19.133853 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:20.036969 kubelet[2337]: E1213 14:17:20.036918 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:20.135205 kubelet[2337]: E1213 14:17:20.135157 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:21.136781 kubelet[2337]: E1213 14:17:21.136669 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:22.137449 kubelet[2337]: E1213 14:17:22.137375 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:23.138114 kubelet[2337]: E1213 14:17:23.137985 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:24.139226 kubelet[2337]: E1213 14:17:24.139121 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:25.139571 kubelet[2337]: E1213 14:17:25.139521 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:26.141058 kubelet[2337]: E1213 14:17:26.141014 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:27.142039 kubelet[2337]: E1213 14:17:27.141971 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:28.143199 kubelet[2337]: E1213 14:17:28.143056 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
14:17:29.143564 kubelet[2337]: E1213 14:17:29.143488 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:30.144622 kubelet[2337]: E1213 14:17:30.144546 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:31.145346 kubelet[2337]: E1213 14:17:31.145268 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:32.147204 kubelet[2337]: E1213 14:17:32.147137 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:32.603506 kubelet[2337]: E1213 14:17:32.603411 2337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:17:33.147409 kubelet[2337]: E1213 14:17:33.147306 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:34.148247 kubelet[2337]: E1213 14:17:34.148197 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:35.149085 kubelet[2337]: E1213 14:17:35.149020 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:36.149481 kubelet[2337]: E1213 14:17:36.149410 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:37.150228 kubelet[2337]: E1213 14:17:37.150174 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:38.151546 kubelet[2337]: E1213 14:17:38.151498 
2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:39.152375 kubelet[2337]: E1213 14:17:39.152333 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:40.037839 kubelet[2337]: E1213 14:17:40.037764 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:40.070240 env[1916]: time="2024-12-13T14:17:40.070164593Z" level=info msg="StopPodSandbox for \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\"" Dec 13 14:17:40.070952 env[1916]: time="2024-12-13T14:17:40.070361876Z" level=info msg="TearDown network for sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" successfully" Dec 13 14:17:40.070952 env[1916]: time="2024-12-13T14:17:40.070444269Z" level=info msg="StopPodSandbox for \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" returns successfully" Dec 13 14:17:40.071367 env[1916]: time="2024-12-13T14:17:40.071294000Z" level=info msg="RemovePodSandbox for \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\"" Dec 13 14:17:40.071451 env[1916]: time="2024-12-13T14:17:40.071378158Z" level=info msg="Forcibly stopping sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\"" Dec 13 14:17:40.071592 env[1916]: time="2024-12-13T14:17:40.071556696Z" level=info msg="TearDown network for sandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" successfully" Dec 13 14:17:40.082681 env[1916]: time="2024-12-13T14:17:40.082609854Z" level=info msg="RemovePodSandbox \"01883d3b87dacb3dae05e32ee20d5199e335f551ea6bf2d4a9bd1cd10c9320dd\" returns successfully" Dec 13 14:17:40.153230 kubelet[2337]: E1213 14:17:40.153156 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
14:17:41.153817 kubelet[2337]: E1213 14:17:41.153768 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:42.155111 kubelet[2337]: E1213 14:17:42.155007 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:42.604114 kubelet[2337]: E1213 14:17:42.604052 2337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:17:43.156561 kubelet[2337]: E1213 14:17:43.156471 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:44.156776 kubelet[2337]: E1213 14:17:44.156720 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:45.157502 kubelet[2337]: E1213 14:17:45.157426 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:46.158633 kubelet[2337]: E1213 14:17:46.158561 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:47.159665 kubelet[2337]: E1213 14:17:47.159618 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:48.161108 kubelet[2337]: E1213 14:17:48.160992 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:49.161663 kubelet[2337]: E1213 14:17:49.161610 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:50.162853 kubelet[2337]: E1213 14:17:50.162770 
2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:51.164205 kubelet[2337]: E1213 14:17:51.164128 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:52.165213 kubelet[2337]: E1213 14:17:52.165128 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:52.605102 kubelet[2337]: E1213 14:17:52.604717 2337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:17:53.165445 kubelet[2337]: E1213 14:17:53.165394 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:54.167145 kubelet[2337]: E1213 14:17:54.167077 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:55.167379 kubelet[2337]: E1213 14:17:55.167297 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:56.169191 kubelet[2337]: E1213 14:17:56.169116 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:57.170462 kubelet[2337]: E1213 14:17:57.170415 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:58.172408 kubelet[2337]: E1213 14:17:58.172363 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:17:59.174252 kubelet[2337]: E1213 14:17:59.174183 2337 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:00.037567 kubelet[2337]: E1213 14:18:00.037499 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:00.174916 kubelet[2337]: E1213 14:18:00.174789 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:01.175658 kubelet[2337]: E1213 14:18:01.175613 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:02.176981 kubelet[2337]: E1213 14:18:02.176895 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:02.606063 kubelet[2337]: E1213 14:18:02.605867 2337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:18:02.633948 kubelet[2337]: E1213 14:18:02.633907 2337 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": read tcp 172.31.20.24:44274->172.31.27.214:6443: read: connection reset by peer" Dec 13 14:18:02.639681 kubelet[2337]: I1213 14:18:02.639641 2337 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 14:18:03.177548 kubelet[2337]: E1213 14:18:03.177483 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:03.369894 kubelet[2337]: E1213 14:18:03.369846 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get 
\"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?resourceVersion=0&timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" Dec 13 14:18:03.370351 kubelet[2337]: E1213 14:18:03.370325 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get \"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" Dec 13 14:18:03.370907 kubelet[2337]: E1213 14:18:03.370876 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get \"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" Dec 13 14:18:03.371417 kubelet[2337]: E1213 14:18:03.371394 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get \"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" Dec 13 14:18:03.371909 kubelet[2337]: E1213 14:18:03.371886 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get \"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" Dec 13 14:18:03.372044 kubelet[2337]: E1213 14:18:03.372021 2337 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Dec 13 14:18:03.646461 kubelet[2337]: E1213 14:18:03.646318 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.20.24:59806->172.31.27.214:6443: read: connection reset by peer" interval="200ms" Dec 13 
14:18:03.847667 kubelet[2337]: E1213 14:18:03.847607 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="400ms" Dec 13 14:18:04.178740 kubelet[2337]: E1213 14:18:04.178661 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:04.248860 kubelet[2337]: E1213 14:18:04.248815 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="800ms" Dec 13 14:18:05.050218 kubelet[2337]: E1213 14:18:05.050171 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": dial tcp 172.31.27.214:6443: connect: connection refused" interval="1.6s" Dec 13 14:18:05.180344 kubelet[2337]: E1213 14:18:05.180304 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:06.181279 kubelet[2337]: E1213 14:18:06.181237 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:07.182644 kubelet[2337]: E1213 14:18:07.182566 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:08.184358 kubelet[2337]: E1213 14:18:08.184315 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:09.185531 kubelet[2337]: E1213 14:18:09.185469 2337 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:10.185917 kubelet[2337]: E1213 14:18:10.185840 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:11.186764 kubelet[2337]: E1213 14:18:11.186678 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:12.187375 kubelet[2337]: E1213 14:18:12.187306 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:13.187719 kubelet[2337]: E1213 14:18:13.187650 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:14.188374 kubelet[2337]: E1213 14:18:14.188300 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:15.189835 kubelet[2337]: E1213 14:18:15.189789 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:16.190867 kubelet[2337]: E1213 14:18:16.190805 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:16.651923 kubelet[2337]: E1213 14:18:16.651783 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.24?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 13 14:18:17.191188 kubelet[2337]: E1213 14:18:17.191122 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:18.192043 kubelet[2337]: E1213 14:18:18.191976 2337 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:19.193204 kubelet[2337]: E1213 14:18:19.193142 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:20.037704 kubelet[2337]: E1213 14:18:20.037640 2337 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:20.193540 kubelet[2337]: E1213 14:18:20.193484 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:21.194382 kubelet[2337]: E1213 14:18:21.194107 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:22.195114 kubelet[2337]: E1213 14:18:22.195073 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:23.196347 kubelet[2337]: E1213 14:18:23.196241 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:23.466012 kubelet[2337]: E1213 14:18:23.465574 2337 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.24\": Get \"https://172.31.27.214:6443/api/v1/nodes/172.31.20.24?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 14:18:24.197909 kubelet[2337]: E1213 14:18:24.197849 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:18:25.198541 kubelet[2337]: E1213 14:18:25.198499 2337 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"